- Oil ticks down on reports of US-Russia deal Reuters
- Brent Set for Weekly Decline TradingView
- WTI tumbles to below $63.00 as tariff concerns mount Mitrade
- Oil Updates — crude set for steepest weekly losses since June Arab News
-
Oil steadies on reports of US-Russia deal, ends week about 5% lower – Reuters
-
August USPTO subject-matter eligibility (SME) reminder memo – helpful for computer-implemented and software inventions
Commissioner Kim issued an August 4 memorandum (Kim Memo) that clarifies how examiners should apply existing SME guidance, especially when analyzing artificial intelligence, machine learning, software, or embedded software (e.g., hardware control) claims. The memo was directed to Technology Centers 2100 (Computer Architecture, Software, and Information Security), 2600 (Communications), and 3600 (eCommerce).
For practitioners, the memo is a very useful guide for rebutting § 101 rejections, with several passages that support reconsideration and argument. The Kim Memo narrows the circumstances under which examiners may invoke the mental-process abstract-idea category, emphasizing that claim limitations that cannot practically be performed in the human mind should not be classified as mental processes. It also provides guidance on integration into a practical application and advises against oversimplification of the claims.
This is particularly relevant guidance for machine learning operations that involve complex computations, especially in view of a recent Federal Circuit decision and subsequent unfavourable Patent Trial and Appeal Board decisions citing the decision.
Introduction
The memorandum instructs examiners to carefully distinguish between claims that merely involve a judicial exception and those that actually recite it. This distinction is crucial because claims that only involve an abstract idea do not require further eligibility analysis under Step 2A Prong One. Additionally, the Kim Memo stresses that Step 2A Prong Two must consider all claim limitations as a whole, focusing on how these limitations interact to integrate the judicial exception into a practical application. This holistic approach prevents piecemeal treatment and ensures the totality of the system architecture, data flows, or hardware cooperation is evaluated.
Examiners are cautioned against over-reliance on the “apply it” rationale, which oversimplifies claim limitations and fails to respect the technical particulars of the implementation.
The memorandum directs that a § 101 rejection should only be issued when it is more likely than not the claim is ineligible, reinforcing the preponderance standard. This means if the factual predicates are debatable, the uncertainty mandates withdrawal of the § 101 rejection.
Each of these points provides valuable advocacy support for applicants seeking to establish patent-eligible subject matter, offering a robust basis for demonstrating that their claims satisfy the requirements of the Alice/USPTO SME framework.
Relevant memo excerpts
Concluding remarks
The Kim Memo provides applicants with authoritative language to help respond to overly expansive § 101 rejections in relation to technological advances embedded in AI and software innovations. By weaving these passages into drafting and prosecution strategies, practitioners can more persuasively demonstrate that their claims satisfy each stage of the Alice / USPTO subject matter eligibility framework. The Kim Memo guidance is helpful both for drafting patent applications and preparing responses to examiner rejections.
-
Nasdaq posts record closing high with tech gains, rate cut optimism – Reuters
- Markets News, Aug. 8, 2025: Nasdaq Closes at Record High as Apple Leads Tech Stock Rally; Major Indexes Post Solid Weekly Gains Investopedia
- Stocks Climb on Hopes for Russia Deal as Oil Falls: Markets Wrap Bloomberg
- Strong Earnings Power US Stocks Higher Despite Sector Hiccups Finimize
- Stock Futures Rise, Dollar Drops After Trump Moves to Remake the Fed Barron’s
-
Kaseya Strengthens Presence in the Nordics with Upstream AB Partnership
Company will leverage Upstream AB’s AI capabilities to accelerate innovation in intelligent automation
Miami, FL, August 8, 2025 — Kaseya, the leading global provider of AI-powered IT management and cybersecurity software, today announced its partnership with Upstream AB, a Stockholm-based IT solutions provider serving MSPs and internal IT departments across the Nordics. Upstream AB has a proven track record of delivering best-in-class solutions alongside exceptional Swedish-language support and strategic guidance.
“Upstream is a fantastic addition to the Kaseya ecosystem,” said Dermot McCann, Executive Vice President and General Manager, EMEA at Kaseya. “They’ve earned the trust of leading MSPs and enterprises across the Nordics by combining world-class technology with deep local service. Together, we’re now able to deliver even greater value, faster innovation and stronger partnership to customers across Sweden and the broader region.”
Upstream AB’s partners will gain direct access to Kaseya 365 with integrated solutions that streamline workflows with automation — delivered with seamless local support and onboarding. Customers will also benefit from Upstream’s AI Center of Excellence, built on over 20 years of automation expertise in the RMM space. This capability will accelerate our innovation in intelligent automation and further enhance the power of Kaseya 365.
Customers will see enhanced access to new products, resources and support channels moving forward. McCann continued, “This partnership demonstrates Kaseya’s continued investment in Europe and its long-term strategy to empower local MSPs with global capabilities, while preserving the high-touch service and regional expertise that customers value.”
The Upstream AB team will continue to operate from Stockholm as part of Kaseya’s growing EMEA infrastructure.
About Kaseya
Kaseya is the leading global provider of AI-powered IT management and cybersecurity software. Its Kaseya 365 platform is purpose-built to meet the demands of multifunctional IT professionals, offering a unified solution to manage infrastructure, secure endpoints, back up critical data and streamline operations. Serving nearly 50,000 MSPs and IT departments across more than 170 countries, Kaseya’s portfolio includes trusted brands such as Datto, Unitrends, IT Glue, ConnectBooster, Spanning Cloud Apps, RapidFire Tools and more. Headquartered in Miami, Florida, Kaseya is privately held with operations in more than a dozen countries. To learn more, visit https://www.kaseya.com/.
About Upstream AB
Founded in 1998 and headquartered in Stockholm, Upstream AB is a value-added distributor of IT management, cybersecurity and business continuity solutions. Serving organizations across Sweden, the company is known for its proactive service model, local language support, and broad vendor portfolio tailored to MSPs and internal IT departments.
-
DARPA unveils winners of AI challenge to boost critical infrastructure cybersecurity
LAS VEGAS — The Defense Advanced Research Projects Agency on Friday announced the winners of its AI Cyber Challenge, or AIxCC, a two-year-long competition that evaluates AI models built to autonomously identify and patch vulnerabilities in open-source code used in critical infrastructure systems.
Team Atlanta, which includes experts from the Georgia Institute of Technology, Samsung Research, the Korea Advanced Institute of Science & Technology and the Pohang University of Science and Technology, won first place, DARPA announced at the DEF CON hacker convention in Las Vegas, Nevada.
Trail of Bits, a New York City-based small business, won second place. And Theori, a team of AI researchers and security professionals in the U.S. and South Korea, won third place.
Four of the models developed by the seven competing finalist teams have already been made available for use, while three others will become available in the coming weeks, DARPA director Stephen Winchell told a large audience at the convention, where the winners were announced.
“We’re living in a world right now that has ancient digital scaffolding that’s holding everything up. A lot of the code bases, a lot of the languages, a lot of the ways we do business — and everything we’ve built on top of it — has all incurred huge technical debt over the years,” Winchell said. “And the reality is [that] it is a problem that is beyond human scale, and it’s a critical problem that we need to solve right now.”
Open-source tools are free to use and implement, making them convenient for critical infrastructure owners and operators. But they’re particularly vulnerable to cyber exploitation because of the nature of their publicly available code bases. If hackers succeed in infiltrating a code base and leveraging a flaw, it could create cascading impacts on public health and safety.
The two-year competition was partly fueled by the advent of large language models that power popular consumer-facing generative AI tools. Many of the major companies that have rolled out such offerings, like Anthropic and OpenAI, provided their model infrastructure to competitors. The goal of the contest, in essence, was to combine AI tooling into systems that can automatically find and patch vulnerabilities in open-source code, and to deploy those fixes at scale to those who may be vulnerable.
The teams at AIxCC uncovered 70 synthetic vulnerabilities built for the competition, along with 18 previously unknown real-world flaws. The latter were not planted in advance and were discovered during the teams’ scans. On average, their models patched flaws in just 45 minutes.
“The teams figured out how to use this technology in better, more innovative ways,” said Andrew Carney, the program manager for AIxCC, speaking on stage at DEF CON. “They also found way more real-world bugs — real vulnerabilities — that we are in the process of disclosing to maintainers.”
Open-source projects — which underpin software systems used everywhere — rely on contributions from community members to keep them updated with patches. The updates are often discussed on forums with volunteer software maintainers, who chat with one another about proposed changes.
Historically, community practices have operated under the premise that all contributors are benevolent. That notion was challenged in early 2024, when a user dubbed “Jia Tan” tried to quietly plant a backdoor into XZ Utils, a data compression utility included in several Linux builds that power software in leading global companies.
DARPA and the Advanced Research Projects Agency for Health also distributed an additional $1.4 million in funds to support further implementation. The cost per successfully completed competition task was $152, a number that falls significantly below human labor costs.
“Today, the world is different” because the competition has “fundamentally changed our understanding of what is possible in terms of automatically finding or really, more importantly, fixing vulnerabilities in software,” Kathleen Fisher, DARPA’s Information Innovation Office director, told reporters at a press conference on the sidelines of DEF CON.
-
Nasdaq Hits Highs as Apple Has Best Week Since ‘20: Markets Wrap
(Bloomberg) — Stocks saw their best week since June, with a rally in big tech driving the Nasdaq 100 to all-time highs. Also buoying sentiment were hopes the US and Russia will reach a deal to halt the war in Ukraine. Gold whipsawed.
The S&P 500 approached 6,400, closing on the brink of a record. Apple Inc. saw its best week since 2020 amid optimism that plans to spend an additional $100 billion on domestic manufacturing may help the company avoid tariffs. Fannie Mae and Freddie Mac soared on reports the US is preparing to sell shares in an offering that could start as early as this year.
The yield on 10-year Treasuries rose three basis points to 4.28%. The dollar barely budged. Oil fluctuated. The Trump administration suggested it would issue a new policy clarifying that imports of gold bars should not face tariffs.
Donald Trump announced that he plans to meet “very shortly” with Russian President Vladimir Putin, as the US president looks to broker a ceasefire agreement in hopes of bringing an end to the war in Ukraine.
Bret Kenwell at eToro said momentum has been strong in equities, with both technicals and fundamentals working in bulls’ favor.
“While an unexpected risk could develop in the second half of 2025, earnings have been better-than-expected and the Fed is inching closer to lower interest rates,” he noted. “As long as the economy holds up, there are catalysts in play for stocks to continue higher.”
Trump said tariffs are “having a huge positive impact on the stock market,” adding that “almost every day, new records are set.” Hundreds of billions of dollars are “pouring into our country’s coffers,” he noted.
“Markets rebounded strongly this week with a clear ‘buy on the dip’ mentality,” said Florian Ielpo at Lombard Odier Investment Managers. “While market sentiment appeared to be waning last week, with subdued reactions to earnings beats, this week clearly demonstrated a different trend.”
And that raises the question: are we close to a solid ceiling?
“Our risk appetite indicator shows improvement from last week, but clearly has room to grow,” Ielpo said.
At Piper Sandler, Craig Johnson says that while the summer doldrums often lead to modest pullbacks in August and September, investors who have doubted this rally are now forced to “buy the dips… and not sell the rips.”
Despite the solid rebound, nearly $28 billion was redeemed from US stocks in the week through Aug. 6, while money market funds attracted about $107 billion, according to a Bank of America Corp. note citing EPFR Global data.
“With the major indexes at or near record highs, valuations are rich, and stock selection and diversification are more important than ever,” said Daniel Skelly at Morgan Stanley’s Wealth Management Market Research & Strategy Team.
On the macro front, BofA’s Michael Hartnett said a majority of the bank’s clients are betting on a “Goldilocks” outcome, which implies an economy that’s running neither too hot nor too cold. He said investors expect a scenario where lower rates would fuel a rally in equities.
Kenwell at eToro says it would be healthy price action for stocks to consolidate after a big rally — either by pulling back or by digesting the move and trading sideways.
“This pullback would likely be viewed as an opportunity for investors to buy the dip rather than run for the hills,” Kenwell said.
“We believe stocks will stay supported amid solid fundamentals, but fresh headlines in the coming week may challenge investor sentiment that remains vulnerable to tariff, economic, and geopolitical risks,” said Ulrike Hoffmann-Burchardi at UBS Global Wealth Management.
The next significant directional move in the market will be driven by fundamentals, either through macro resilience driving earnings estimates higher or further cracks in the labor market driving increased recession concerns, according to Mark Hackett at Nationwide.
“Given the moderation in technical indicators and the sluggish seasonal shift, a period of consolidation is not unexpected or unhealthy,” he said.
Federal Reserve Bank of St. Louis President Alberto Musalem said he supported last week’s decision by policymakers to leave interest rates steady, adding the US central bank is still missing more on the inflation side of its mandate.
Traders will soon shift their focus to next week’s release of US inflation numbers for clues on the Fed’s next steps.
“We expect the July CPI report to show that core inflation gained additional momentum,” according to strategists at TD Securities.
Corporate Highlights:
- Meta Platforms Inc. has selected Pacific Investment Management Co. and Blue Owl Capital Inc. to lead a $29 billion financing for its data center expansion in rural Louisiana as the race for artificial intelligence infrastructure heats up, according to people with knowledge of the matter.
- Tesla Inc. is disbanding its Dojo team and its leader will leave the company, according to people familiar with the matter, upending the automaker’s effort to build an in-house supercomputer for developing driverless-vehicle technology.
- Intel Corp. Chief Executive Officer Lip-Bu Tan said he’s got the full backing of the company’s board, responding for the first time to US President Donald Trump’s call for his resignation over conflicts of interest.
- SoftBank Group Corp. is the buyer taking ownership of Foxconn Technology Group’s electric vehicle plant in Ohio, a move aimed at kick-starting the Japanese company’s $500 billion Stargate data center project with OpenAI and Oracle Corp.
- Taiwan Semiconductor Manufacturing Co. reported a 26% growth spurt in July, adding to evidence of accelerating spending on artificial intelligence.
- Expedia Group Inc. raised its full-year sales target after reporting strong second-quarter bookings, fueled mainly by its enterprise business as well as improved demand from US consumers.
- Pinterest Inc. reported second-quarter sales that beat analysts’ expectations, but earnings fell short of Wall Street estimates and user growth in the US and Canada, the company’s most lucrative market, was flat.
- Under Armour Inc. forecast worse-than-expected sales and profit for the current quarter, stalling a turnaround plan that was taking hold.
- Gilead Sciences Inc. lifted its full-year outlook after strong HIV drug sales in the second quarter helped revenue and earnings modestly beat analyst expectations.
- Wendy’s Co. cut its full-year sales guidance after posting a bigger-than-expected quarterly decline, highlighting the economic pressures weighing on the chain’s US business.
- Instacart posted its strongest order growth since 2022 for a second straight quarter and beat earnings estimates for the current period, a sign of resilience in its core delivery business after it rolled out initiatives to cater to price-conscious consumers.
- Trade Desk Inc. reported second-quarter results that spurred multiple downgrades, with firms noting growing concerns about competition from Amazon.com Inc.
- Sweetgreen Inc. slashed its sales guidance after a second straight quarter of disappointing results, highlighting the salad chain’s struggles to sell $15 salads to budget-strained diners.
What Bloomberg Strategists say…
“An improving geopolitical backdrop has become a headwind for oil prices, especially as peace in Ukraine looks closer. Traders will now increasingly look past geopolitical hurdles, leaving the market uncomfortably exposed to uncertain demand and rising supply.”
—Michael Ball, Macro Strategist, Markets Live
Some of the main moves in markets:
Stocks
- The S&P 500 rose 0.8% as of 4 p.m. New York time
- The Nasdaq 100 rose 0.9%
- The Dow Jones Industrial Average rose 0.5%
- The MSCI World Index rose 0.7%
- The Bloomberg Magnificent 7 Total Return Index rose 1.6%
- The Russell 2000 Index rose 0.2%
Currencies
- The Bloomberg Dollar Spot Index was little changed
- The euro fell 0.2% to $1.1643
- The British pound was little changed at $1.3450
- The Japanese yen fell 0.4% to 147.75 per dollar
Cryptocurrencies
- Bitcoin fell 0.7% to $116,464.8
- Ether rose 4.8% to $4,062.95
Bonds
- The yield on 10-year Treasuries advanced three basis points to 4.28%
- Germany’s 10-year yield advanced six basis points to 2.69%
- Britain’s 10-year yield advanced five basis points to 4.60%
- The yield on 2-year Treasuries advanced three basis points to 3.76%
- The yield on 30-year Treasuries advanced three basis points to 4.85%
Commodities
- West Texas Intermediate crude fell 0.4% to $63.64 a barrel
- Spot gold was little changed
©2025 Bloomberg L.P.
-
Trump’s team expands search for Fed chair to about 10 names, WSJ reports
Federal Reserve Chairman Jerome Powell conducts a news conference after a meeting of the Federal Open Market Committee on Wednesday, July 30, 2025.
U.S. President Donald Trump’s team is reviewing new contenders to lead the Federal Reserve once Chair Jerome Powell’s term ends in May, including a longtime economic consultant and a past regional Fed president, the Wall Street Journal reported on Friday.
The 10 or so people on the newly expanded list include former St. Louis Fed President James Bullard and Marc Sumerlin, a former economic adviser to President George W. Bush, WSJ said, citing officials. Trump last week said he had narrowed the list to four.
National Economic Council director Kevin Hassett and former Fed governor Kevin Warsh remain under consideration, along with current Fed governor Christopher Waller, WSJ said. Reuters has previously reported that these three are candidates, but could not immediately verify the rest of the report.
Trump has been criticizing Powell all year for not cutting rates, building on disappointment with the Fed chief that emerged during his first term as president shortly after he elevated Powell to the Fed chair role.
It was not clear what a broader list of candidates would mean for the timing of an appointment. Treasury Secretary Scott Bessent is helping lead the search.
The president moved quickly to name an ally to the Fed Board this week after Fed Governor Adriana Kugler, a Biden appointee who did not support rate cuts, unexpectedly resigned as of the end of this week. Council of Economic Advisers Chairman Stephen Miran will serve out the remaining months of Kugler’s term, which ends on January 31.
Trump has indicated a search continues for someone who could fill the Fed Board role for a 14-year term beginning February 1.
-
Canada sheds tens of thousands of jobs as tariffs dent hiring plans – Reuters
- Breaking: Canada Unemployment Rate holds steady at 6.9% in July vs. 7% forecast FXStreet
- Instant View: Canada’s economy sheds 40,800 jobs in July WKZO
- Weak July jobs report highlights tough summer for youth but little tariff impact Pique Newsmagazine
- USDCAD technicals: The weaker Canada jobs report has pushed the USDCAD higher. What next? TradingView
-
Design of an integral sliding mode controller for reducing CO2 emissions in the transport sector to control global warming
Stability analysis examines how the solutions of a system behave as time progresses, particularly in response to changes in initial conditions or parameters. The key steps in the stability analysis of the system in Eq. (5) are given below.
Local stability
The possible equilibria of model Eq. (5) are given below:

1. \(S_{1} = \left( \frac{Q}{\sigma },\,0,\,0,\,\frac{Q\beta + A\sigma }{d\sigma } \right)\),

2. \(S_{2} = \left( \frac{r\left( Q + L\eta \right)}{L\eta \theta + r\sigma },\,\frac{L\left( r\sigma - Q\theta \right)}{L\eta \theta + r\sigma },\,0,\,\frac{Qr\beta + Lr\beta \eta + AL\eta \theta - LQ\theta \lambda + Ar\sigma + Lr\lambda \sigma }{d\left( L\eta \theta + r\sigma \right)} \right)\),

3. \(S_{3} = \left( C_{3},\,0,\,V_{3},\,G_{3} \right)\),

4. \(S_{4} = \left( C_{4},\,N_{4},\,V_{4},\,G_{4} \right)\),

where

\(C_{3} = \frac{Q\alpha + L_{1}\delta_{1}\left( \alpha - \omega \right)}{\alpha \sigma }\), \(V_{3} = L_{1} - \frac{L_{1}\omega }{\alpha }\), \(G_{3} = \frac{Q\alpha \beta + A\alpha \sigma + L_{1}\beta \delta_{1}\left( \alpha - \omega \right)}{d\alpha \sigma }\), \(C_{4} = \frac{r\left( Q\alpha + L\alpha \eta + L_{1}\delta_{1}\left( \alpha - \omega \right) \right)}{\alpha \left( L\eta \theta + r\sigma \right)}\), \(N_{4} = \frac{L\left( -Q\alpha \theta + r\alpha \sigma + L_{1}\delta_{1}\theta \left( \alpha - \omega \right) \right)}{\alpha \left( L\eta \theta + r\sigma \right)}\), and \(G_{4} = \frac{Q\alpha \left( r\beta - L\theta \lambda \right) + \sigma \left( Lr\beta \eta + AL\eta \theta + Ar\sigma + Lr\lambda \sigma \right) + L_{1}\delta_{1}\left( r\beta - L\theta \lambda \right)\left( \alpha - \omega \right)}{\alpha d\left( L\eta \theta + r\sigma \right)}\).
The local stability at the equilibrium points \(S_{1}\), \(S_{2}\), \(S_{3}\) and \(S_{4}\) is determined through the sign of the eigenvalues of the Jacobian matrix \(J\), defined as

$$J = \begin{bmatrix} -\sigma & \eta & \delta_{1} & 0 \\ -N\theta & -\frac{Nr}{L} + r\left( 1 - \frac{N}{L} \right) - C\theta & 0 & 0 \\ 0 & 0 & -\frac{V\alpha }{L_{1}} + \left( 1 - \frac{V}{L_{1}} \right)\alpha - \omega & 0 \\ \beta & \lambda & 0 & -d \end{bmatrix}.$$

(6)
Let \(J_{i}\) \(\left( i = 1,2,3,4 \right)\) denote the Jacobian matrix evaluated at the equilibrium point \(S_{i}\) \(\left( i = 1,2,3,4 \right)\).

For \(S_{1}\), the eigenvalues of \(J_{1}\) are \(\left( -d,\, -\sigma,\, \frac{r\sigma - Q\theta }{\sigma },\, \alpha - \omega \right)\), and the system is unstable at this equilibrium point if \(\alpha > \omega\). Moreover, the eigenvalues of \(J_{2}\) are

$$\left( -d,\; \frac{-b + \sqrt{b^{2} - 4ac}}{2a},\; \frac{-b - \sqrt{b^{2} - 4ac}}{2a},\; \alpha - \omega \right),$$

(7)

where \(a = L\eta \theta + r\sigma\), \(b = Qr\theta - r^{2}\sigma - L\eta \theta \sigma - r\sigma^{2}\), and \(c = -LQ\eta \theta^{2} - Qr\theta \sigma + Lr\eta \theta \sigma + r^{2}\sigma^{2}\). Thus, the equilibrium point \(S_{2}\) is unstable if \(\alpha > \omega\).
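As a quick numerical sanity check, the eigenvalues of the Jacobian at \(S_1\) can be compared against the analytic spectrum \(\left( -d, -\sigma, \frac{r\sigma - Q\theta}{\sigma}, \alpha - \omega \right)\). A minimal sketch; all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative parameter values (assumptions, not the paper's calibration)
Q, sigma, r, theta, L = 1.0, 0.5, 0.3, 0.2, 10.0
L1, alpha, omega = 5.0, 0.4, 0.1
eta, delta1, beta, lam, d = 0.15, 0.05, 0.25, 0.1, 0.2

def jacobian(C, N, V):
    """Jacobian J of Eq. (5) at state (C, N, V, G); G enters only through -d."""
    return np.array([
        [-sigma, eta, delta1, 0.0],
        [-N * theta, -N * r / L + r * (1.0 - N / L) - C * theta, 0.0, 0.0],
        [0.0, 0.0, -V * alpha / L1 + (1.0 - V / L1) * alpha - omega, 0.0],
        [beta, lam, 0.0, -d],
    ])

# Equilibrium S1 = (Q/sigma, 0, 0, (Q*beta + A*sigma)/(d*sigma))
J1 = jacobian(Q / sigma, 0.0, 0.0)
numeric = np.sort(np.linalg.eigvals(J1).real)
analytic = np.sort([-d, -sigma, (r * sigma - Q * theta) / sigma, alpha - omega])
print(numeric, analytic)
```

With these assumed values \(\alpha > \omega\), so one eigenvalue is positive and the numeric check reproduces the instability of \(S_1\).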
The characteristic equation of \(J_{i}\;(i = 3,4)\) for \(S_{i}\;(i = 3,4)\) is given by

$$x^{4} + A_{1}x^{3} + A_{2}x^{2} + A_{3}x + A_{4} = 0,$$

(8)
where

$$A_{1} = \frac{1}{LL_{1}}\left( dLL_{1} - LL_{1}r + 2L_{1}N_{i}r + LL_{1}\sigma + C_{i}LL_{1}\sigma + LV_{i}\left( V_{i} - L_{1} \right)\alpha \omega \right),$$

$$A_{2} = \frac{1}{LL_{1}}\Big( -dLL_{1}r + 2dL_{1}N_{i}r + dLL_{1}\sigma + C_{i}dLL_{1}\sigma - LL_{1}r\sigma + 2L_{1}N_{i}r\sigma + LL_{1}N_{i}\eta \sigma + C_{i}LL_{1}\sigma^{2} + dLV_{i}\left( V_{i} - L_{1} \right)\alpha \omega - LrV_{i}\left( V_{i} - L_{1} \right)\alpha \omega + 2N_{i}rV_{i}\left( V_{i} - L_{1} \right)\alpha \omega + LV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma \omega + C_{i}LV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma \omega \Big),$$

$$A_{3} = \frac{1}{LL_{1}}\Big( -dLL_{1}r\sigma + 2dL_{1}N_{i}r\sigma + dLL_{1}N_{i}\eta \sigma + C_{i}dLL_{1}\sigma^{2} - dLrV_{i}\left( V_{i} - L_{1} \right)\alpha \omega + 2dN_{i}rV_{i}\left( V_{i} - L_{1} \right)\alpha \omega + dLV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma \omega + C_{i}dLV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma \omega - LrV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma \omega + 2N_{i}rV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma \omega + LN_{i}V_{i}\left( V_{i} - L_{1} \right)\alpha \eta \sigma \omega + C_{i}LV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma^{2}\omega \Big),$$

$$A_{4} = \frac{1}{LL_{1}}\Big( -dLrV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma \omega + 2dN_{i}rV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma \omega + dLN_{i}V_{i}\left( V_{i} - L_{1} \right)\alpha \eta \sigma \omega + C_{i}dLV_{i}\left( V_{i} - L_{1} \right)\alpha \sigma^{2}\omega \Big),$$
where the coefficients \(A_{i}\;\left( i = 1,2,3,4 \right)\) are positive. All roots of Eq. (8) have negative real parts if and only if the following Routh–Hurwitz condition is satisfied:

$$A_{3}\left( A_{1}A_{2} - A_{3} \right) - A_{1}^{2}A_{4} > 0.$$

(9)

Therefore, the system is stable at \(S_{i}\;(i = 3,4)\) if Eq. (9) holds.
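For a quartic with positive coefficients, the Routh–Hurwitz condition of Eq. (9) can be checked numerically against the roots themselves. A minimal sketch; the coefficient values are illustrative assumptions:

```python
import numpy as np

def routh_hurwitz_quartic(A1, A2, A3, A4):
    """All roots of x^4 + A1 x^3 + A2 x^2 + A3 x + A4 = 0 have negative
    real parts iff the coefficients are positive and Eq. (9) holds."""
    return (A1 > 0 and A3 > 0 and A4 > 0
            and A3 * (A1 * A2 - A3) - A1**2 * A4 > 0)

# Cross-check against numerically computed roots for assumed coefficients
A = (2.0, 3.0, 2.0, 0.5)
roots = np.roots([1.0, *A])
print(routh_hurwitz_quartic(*A), bool(np.all(roots.real < 0)))
```

Both checks agree: the criterion and the explicit root computation classify the same polynomials as stable (away from boundary cases with roots on the imaginary axis).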
Theorem 1
The system in Eq. (5) is always unstable at the equilibria \(S_{1}\) and \(S_{2}\) under the condition \(\alpha > \omega\). The system is locally asymptotically stable at the equilibrium points \(S_{i}\;(i = 3,4)\) if and only if Eq. (9) holds.
Global stability
Now, the Lyapunov function method is applied to determine global stability. The following theorem states the conditions for global stability.
Theorem 2
The system in Eq. (5) is globally stable in the region \(\Omega\) if the following conditions hold:

$$m_{2} < \frac{2\delta_{1}^{2}L_{1}}{\alpha \sigma },$$

(10)

$$m_{3} < \min \left\{ \frac{\eta rd}{2\theta L\lambda^{2}},\; \frac{d\sigma }{4\beta^{2}} \right\}.$$

(11)
Proof
Consider the following positive definite function:

$$V = \frac{1}{2}\left( C - C^{*} \right)^{2} + m_{1}\left( N - N^{*} - N^{*}\ln \frac{N}{N^{*}} \right) + m_{2}\left( V - V^{*} - V^{*}\ln \frac{V}{V^{*}} \right) + \frac{m_{3}}{2}\left( G - G^{*} \right)^{2},$$

(12)
where \(m_{1}\), \(m_{2}\) and \(m_{3}\) are positive constants. Equation (5) is globally stable if \(\frac{dV}{dt} < 0\) at all equilibrium points. Therefore, the derivative of Eq. (12) is calculated as

$$\frac{dV}{dt} = \left( C - C^{*} \right)\frac{dC}{dt} + m_{1}\frac{\left( N - N^{*} \right)}{N}\frac{dN}{dt} + m_{2}\frac{\left( V - V^{*} \right)}{V}\frac{dV}{dt} + m_{3}\left( G - G^{*} \right)\frac{dG}{dt}.$$

(13)
Using Eqs. (1)–(4), we obtain

$$\begin{aligned} \frac{dV}{dt} & = \left( C - C^{*} \right)\left[ Q + \delta_{1}V + \eta N - \sigma C \right] + m_{1}\left( N - N^{*} \right)\left[ r\left( 1 - \frac{N}{L} \right) - \theta C \right] \\ & \quad + m_{2}\left( V - V^{*} \right)\left[ \alpha \left( 1 - \frac{V}{L_{1}} \right) - \omega \right] + m_{3}\left( G - G^{*} \right)\left[ A + \beta C + \lambda N - dG \right]. \end{aligned}$$

(14)
Substituting the equilibrium conditions for \(S^{*}\left( C^{*},N^{*},V^{*},G^{*} \right)\) of Eq. (5), Eq. (14) is rewritten as

$$\begin{aligned} \frac{dV}{dt} & = \left( C - C^{*} \right)\left[ Q + \delta_{1}V + \eta N - \sigma C - \left\{ Q + \delta_{1}V^{*} + \eta N^{*} - \sigma C^{*} \right\} \right] \\ & \quad + m_{1}\left( N - N^{*} \right)\left[ r\left( 1 - \frac{N}{L} \right) - \theta C - \left\{ r\left( 1 - \frac{N^{*}}{L} \right) - \theta C^{*} \right\} \right] \\ & \quad + m_{2}\left( V - V^{*} \right)\left[ \alpha \left( 1 - \frac{V}{L_{1}} \right) - \omega - \left\{ \alpha \left( 1 - \frac{V^{*}}{L_{1}} \right) - \omega \right\} \right] \\ & \quad + m_{3}\left( G - G^{*} \right)\left[ A + \beta C - dG + \lambda N - \left\{ A + \beta C^{*} - dG^{*} + \lambda N^{*} \right\} \right]. \end{aligned}$$

(15)
Rewriting Eq. (15) gives

$$\begin{aligned} \frac{dV}{dt} & = \left( C - C^{*} \right)\left[ \delta_{1}\left( V - V^{*} \right) + \eta \left( N - N^{*} \right) - \sigma \left( C - C^{*} \right) \right] \\ & \quad + m_{1}\left( N - N^{*} \right)\left[ \frac{-r}{L}\left( N - N^{*} \right) - \theta \left( C - C^{*} \right) \right] + m_{2}\left( V - V^{*} \right)\left[ \frac{-\alpha }{L_{1}}\left( V - V^{*} \right) \right] \\ & \quad + m_{3}\left( G - G^{*} \right)\left[ \beta \left( C - C^{*} \right) - d\left( G - G^{*} \right) + \lambda \left( N - N^{*} \right) \right], \end{aligned}$$

(16)

$$\begin{aligned} \frac{dV}{dt} & = \delta_{1}\left( V - V^{*} \right)\left( C - C^{*} \right) + \left( \eta - m_{1}\theta \right)\left( N - N^{*} \right)\left( C - C^{*} \right) - \sigma \left( C - C^{*} \right)^{2} \\ & \quad - \frac{m_{1}r}{L}\left( N - N^{*} \right)^{2} - \frac{m_{2}\alpha }{L_{1}}\left( V - V^{*} \right)^{2} + m_{3}\beta \left( G - G^{*} \right)\left( C - C^{*} \right) \\ & \quad + m_{3}\lambda \left( G - G^{*} \right)\left( N - N^{*} \right) - m_{3}d\left( G - G^{*} \right)^{2}. \end{aligned}$$

(17)
Choosing \(m_{1} = \frac{\eta }{\theta }\), Eq. (17) becomes

$$\begin{aligned} \frac{dV}{dt} & = -\frac{\sigma }{2}\left( C - C^{*} \right)^{2} + m_{3}\beta \left( G - G^{*} \right)\left( C - C^{*} \right) - \frac{m_{3}d}{2}\left( G - G^{*} \right)^{2} \\ & \quad - \frac{\sigma }{2}\left( C - C^{*} \right)^{2} + \delta_{1}\left( V - V^{*} \right)\left( C - C^{*} \right) - \frac{m_{2}\alpha }{L_{1}}\left( V - V^{*} \right)^{2} \\ & \quad - \frac{\eta r}{\theta L}\left( N - N^{*} \right)^{2} + m_{3}\lambda \left( G - G^{*} \right)\left( N - N^{*} \right) - \frac{m_{3}d}{2}\left( G - G^{*} \right)^{2}. \end{aligned}$$

(18)
Note that \(a_{1} x^{2} + a_{2} xy + a_{3} y^{2}\) is negative definite if \(a_{1} < 0\) and \(a_{2}^{2} < 4a_{1} a_{3}\). Using this condition, \(\frac{dV}{dt}\) is negative definite within the region of attraction \(\Omega\) when the conditions on \(m_{2}\) and \(m_{3}\) given in Eqs. (10) and (11) hold.
Integrating data-driven and physics-based approaches for robust wind power prediction: A comprehensive ML-PINN-Simulink framework
This section provides a comprehensive examination of the methodologies employed in forecasting wind energy generation using the most effective machine learning models, such as Random Forest and XGBoost. The discussion encompasses data preprocessing, feature selection, model training, validation, and performance evaluation. Additionally, it addresses the modeling of various components of the WECS within MATLAB Simulink, including wind turbine dynamics, power electronics, and control strategies, to facilitate a thorough system analysis.
Machine learning framework
Figure 1 illustrates a framework for predicting wind power utilizing machine learning models. Initially, wind power data undergoes pre-processing and is subsequently divided into training and testing datasets. The Random Forest and XGBoost models are developed using the training data. These trained models are then employed to forecast power output by evaluating the input variables from the testing dataset. The performance of the models is assessed by comparing the predicted and actual power outputs, using metrics such as R-squared, Root Mean Square Error (RMSE), and Mean Absolute Error (MAE).
Fig. 1 Workflow diagram for machine learning models.
The wind speed data from the MERRA-2 dataset constitutes the primary input feature for the ML models. An exploratory data analysis was performed to identify patterns and relationships within the data, and missing values were addressed to maintain data integrity. To enhance prediction accuracy, additional derived features, such as rolling averages and wind speed fluctuations, were engineered43. Preprocessing is a critical step in ensuring that raw wind power data is clean, consistent, and suitable for training machine learning models, involving several intricate processes. Missing values, which may arise from sensor failures, connectivity issues, or other data collection problems, were addressed using various techniques. This included mean or median imputation, which replaces missing values with the average or median of the corresponding variable, and interpolation methods such as linear or polynomial interpolation, which estimate missing values based on surrounding data patterns. Advanced machine learning models were also employed to predict missing values using features from other datasets. In instances where imputation was impractical, rows or attributes with a high percentage of missing data were removed.
Outlier detection and removal were performed to mitigate the impact of data points that significantly deviated from typical values, as such anomalies can distort model training. Techniques such as the Z-score method, interquartile range (IQR), and machine learning-based anomaly detection were employed. Since wind speed is measured in meters per second and often appears alongside variables with differing scales, normalization (scaling values between 0 and 1) or standardization (scaling to a mean of 0 and standard deviation of 1) was used to ensure that all features contributed equally during model training. Feature engineering was applied to enhance model performance by creating new variables that more accurately capture the underlying relationships in the data. For instance, wind direction and speed were combined to create vector representations, shifting medians and rolling averages were calculated to track temporal trends, and wind speeds were categorized into three classes: low, medium, and high. To ensure that only the most relevant variables were included, feature selection and dimensionality reduction techniques such as Principal Component Analysis (PCA), mutual information, and correlation analysis were utilized. In situations where data imbalance was detected (e.g., an abundance of instances of specific wind conditions), resampling strategies were implemented. Oversampling involved duplicating samples from the minority class, while under-sampling reduced the number of samples from the majority class to balance the dataset. Categorical variables such as wind turbine types were converted into numerical representations using label encoding or one-hot encoding.
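The IQR-based outlier filter and min–max normalization described above can be sketched in NumPy as follows (the sample wind-speed series and the conventional k = 1.5 multiplier are illustrative assumptions, not the study's exact pipeline):

```python
import numpy as np

def remove_outliers_iqr(x, k=1.5):
    """Drop values outside [Q1 - k*IQR, Q3 + k*IQR]; k=1.5 is the usual default."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    mask = (x >= q1 - k * iqr) & (x <= q3 + k * iqr)
    return x[mask], mask

def min_max_scale(x):
    """Scale values to the [0, 1] range used for model training."""
    return (x - x.min()) / (x.max() - x.min())

# Toy wind-speed series (m/s) with one obvious sensor spike at 42.0
wind = np.array([5.1, 6.3, 7.0, 6.8, 42.0, 5.9, 6.1])
clean, mask = remove_outliers_iqr(wind)
scaled = min_max_scale(clean)
```

Standardization (zero mean, unit variance) would replace `min_max_scale` with `(x - x.mean()) / x.std()` when features must be comparable across differing scales.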
For time-series data, maintaining temporal consistency was essential. This required proper alignment across time intervals and the use of consistent timestamps to preserve temporal integrity. Each of these preprocessing steps contributed to preparing high-quality, well-structured input data, enabling machine learning models to effectively identify patterns and improve forecasting performance22. Hyperparameter tuning played a pivotal role in optimizing the performance of the machine learning models. These hyperparameters, which are not learned during model training, govern model complexity and the learning process. Examples include the number of neighbours in K-Nearest Neighbours and the depth of decision trees in ensemble methods. In this study, hyperparameter optimization was performed using a combination of grid search and manual tuning based on cross-validation results. This process significantly improved model performance, particularly for ensemble models such as Random Forest and XGBoost. Careful adjustment of parameters such as n_estimators, max_depth, and learning_rate enabled these models to capture complex, non-linear patterns in wind energy data without overfitting. XGBoost achieved the best performance, with a testing Mean Absolute Error (MAE) of 0.035 and an R2 of 0.997. The optimized Random Forest model obtained a testing MAE of 0.027 and an R2 of 0.995. These results underscore the importance of fine-tuning key hyperparameters to achieve a balance between generalization and model complexity, particularly in dynamic renewable energy forecasting applications.
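The grid-search-with-cross-validation loop described above can be sketched with a closed-form ridge regressor standing in for the tree ensembles (the synthetic data, parameter grid, and model are illustrative assumptions; the study tuned n_estimators, max_depth, and learning_rate on Random Forest and XGBoost):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))   # stand-in features (e.g. wind speed, rolling mean, ...)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=120)

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def kfold_mse(X, y, alpha, k=5):
    """Average validation MSE over k folds (fivefold by default, as in the study)."""
    folds = np.array_split(np.arange(len(X)), k)
    errs = []
    for val_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(X)), val_idx)
        w = ridge_fit(X[train_idx], y[train_idx], alpha)
        errs.append(np.mean((X[val_idx] @ w - y[val_idx]) ** 2))
    return np.mean(errs)

grid = [0.01, 0.1, 1.0, 10.0]   # hypothetical hyperparameter grid
best_alpha = min(grid, key=lambda a: kfold_mse(X, y, a))
```

The same pattern — score every grid point by averaged fold error, keep the best — applies unchanged when the inner model is a tree ensemble.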
Figure 2 illustrates a correlation heatmap that highlights the relationships between various variables. Notably, there is a strong positive correlation between wind speed and electricity generation (r ≈ 0.97), indicating that increased wind speeds are associated with higher electricity production. Conversely, temporal variables such as month and date exhibit weaker correlations with wind speed and power output, suggesting that seasonal factors exert a limited influence on energy generation. The Comprehensive Preprocessing Summary of the dataset is:
- Initial dataset: 8,760 rows × 4 columns
- Final dataset: 8,694 rows × 23 columns
- Rows removed: 66
- Columns added: 17
- Missing values imputed: 0
- Outliers detected: 66
- Outliers removed: 66
- Processing steps performed:
  - Added 17 temporal and cyclical features
  - Detected 66 outliers, removed 66
- Data transformation metrics:
  - Row change: −0.75%
  - Column change: +475.00%
  - Data completeness: 100.00%
Fig. 2 Correlation analysis of the attributes in the wind data set.
Table 1 presents a summary of the optimized hyperparameters for various machine learning models and a PINN. Each ML model is meticulously fine-tuned using fivefold cross-validation to ensure robust generalization and to mitigate overfitting to specific temporal or stochastic variations in the wind data. This cross-validation strategy facilitates the evaluation of models on multiple subsets of the dataset, thereby enhancing the reliability of performance estimates. Ensemble models, such as Random Forest, Gradient Boosting, and XGBoost, are configured with multiple estimators and regularization parameters. The tuning process for these models emphasizes capturing complex, non-linear interactions between wind features while maintaining generalization across validation folds. Simpler models, such as Linear Regression, serve as benchmarks and are evaluated using the same cross-validation protocol for consistency. The K-Nearest Neighbours model is tuned with distance-based weighting and validated across folds to more accurately model localized patterns in wind behaviour. The Neural Network architecture, comprising multiple hidden layers and employing adaptive optimization via the Adam solver, is trained and validated using Stratified K-Fold cross-validation to address non-uniform data distribution and potential class imbalance.
Table 1 Hyperparameters of Machine Learning Models.
For the Physics-Informed Neural Network (PINN), cross-validation is adapted through a domain-specific split of the dataset into:
- A data subdomain for supervised loss,
- A physics-informed subdomain composed of collocation points enforcing PDE residuals, and
- A validation subset used to tune key hyperparameters such as the number of layers, neurons per layer, learning rate, and physics-constrained weighting.
This structured validation ensures the PINN not only fits observational data but also adheres to the underlying physical laws governing wind energy generation. Collectively, the incorporation of cross-validation across all models strengthens predictive rigor and ensures the hyperparameter choices yield models that generalize well to unseen wind energy data scenarios.
Data splitting
To ensure that the training set contains enough information for the model to learn patterns and that the testing set remains unseen during training, the pre-processed data is randomly divided into 80% training and 20% testing subsets. This allows for an accurate assessment of the model’s generalization and predictive performance44.
Figure 3 illustrates the data distribution used in the model development process. As shown, 80% of the dataset (in green) is allocated for training, while the remaining 20% (in yellow) is reserved for testing. This commonly adopted 80/20 split ensures that the model is trained on a substantial portion of the data while preserving a representative set for reliable performance evaluation and validation.
Fig. 3 Data distribution diagram.
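The 80/20 split can be reproduced with a shuffled index permutation (a NumPy sketch; the study's exact seed and tooling are not specified):

```python
import numpy as np

def train_test_split(X, y, test_frac=0.2, seed=42):
    """Shuffle row indices, then slice off test_frac of them for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

# Toy data: 50 samples, 2 features
X = np.arange(100).reshape(50, 2).astype(float)
y = np.arange(50).astype(float)
X_train, X_test, y_train, y_test = train_test_split(X, y)
```

For strict time-series evaluation, a chronological split (last 20% of timestamps held out) would avoid leakage from the future into training; the random split above follows the workflow as described.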
Evaluation
The trained models use input variables from the testing dataset to predict wind power output (Predicted Power)44. The predicted outputs are then compared with the actual power measured in the testing dataset. The evaluation metrics are:
Mean Absolute Error (MAE):
Measures the average magnitude of forecast errors, regardless of their direction.
$$MAE = \frac{1}{n}\sum_{i=1}^{n} \left| y_{i} - \hat{y}_{i} \right|$$
(1)
where \(y_{i}\) is the actual value, \(\hat{y}_{i}\) is the predicted value, and n is the number of samples.
Mean Squared Error (MSE):
Penalizes larger errors more heavily by squaring them.
$$MSE = \frac{1}{n}\sum_{i=1}^{n} \left( y_{i} - \hat{y}_{i} \right)^{2}$$
(2)
Root Mean Squared Error (RMSE):
The square root of the average squared difference between predicted and actual values.
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left( y_{i} - \hat{y}_{i} \right)^{2}}$$
(3)
R-Squared (Coefficient of Determination):
Indicates how well the model accounts for the variation in the data.
$$R^{2} = 1 - \frac{\frac{1}{n}\sum_{i=1}^{n} \left( y_{i} - \hat{y}_{i} \right)^{2}}{\frac{1}{n}\sum_{i=1}^{n} \left( y_{i} - \bar{y} \right)^{2}}$$
(4)
where \(\bar{y}\) is the mean of the actual values.
This iterative process ensures the selected model is optimized to deliver accurate wind power forecasts, paving the way for ongoing advances in renewable energy forecasting and optimization43.
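Equations (1)–(4) translate directly into NumPy (evaluated here on toy values, not the study's data):

```python
import numpy as np

def mae(y, y_hat):
    """Eq. (1): mean absolute error."""
    return np.mean(np.abs(y - y_hat))

def mse(y, y_hat):
    """Eq. (2): mean squared error."""
    return np.mean((y - y_hat) ** 2)

def rmse(y, y_hat):
    """Eq. (3): root of the mean squared error."""
    return np.sqrt(mse(y, y_hat))

def r2(y, y_hat):
    """Eq. (4): 1 - residual sum of squares / total sum of squares."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.5, 5.5, 7.0, 8.0])
```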
Mathematical modelling of stacking ensemble (RF + XGB)
Let,
X = [X1, X2, …, Xn] ∈ ℝn×d be the input feature matrix.
y = [y1, y2, …, yn] ∈ ℝn be the vector of target values.
fRF is the function learned by the Random Forest model.
fXGB is the function learned by the eXtreme Gradient Boosting (XGBoost) model.
fmeta is the meta-learner, trained on the outputs of RF and XGB.
Base learner predictions:
For each sample Xi:
$$\hat{y}_{i}^{(1)} = f_{RF} (X_{i} )$$
(5)
$$\hat{y}_{i}^{(2)} = f_{XGB} (X_{i} )$$
(6)
Meta-Feature Construction:
$$Z_{i} = \left[ \hat{y}_{i}^{(1)} , \hat{y}_{i}^{(2)} \right]$$
(7)
Let Z = [Z1, …, Zn] ∈ ℝn×2.
Train Meta-Learner:
The meta-learner is trained on (Z, y):
$$\hat{y}_{i} = f_{meta} (Z_{i} ) = f_{meta} \left( \left[ \hat{y}_{i}^{(1)} , \hat{y}_{i}^{(2)} \right] \right)$$
(8)
where fmeta is a Ridge Regression model.
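Equations (5)–(8) can be sketched end-to-end in NumPy, with two deliberately simple linear fits as stand-ins for the RF and XGBoost base learners and a closed-form ridge meta-learner (all data and models here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = X[:, 0] * 3.0 + np.sin(X[:, 1]) + 0.05 * rng.normal(size=200)

# Stand-ins for f_RF and f_XGB: least-squares fits on different feature subsets
w1, *_ = np.linalg.lstsq(X[:, :2], y, rcond=None)
w2, *_ = np.linalg.lstsq(X[:, 2:], y, rcond=None)
y1_hat = X[:, :2] @ w1              # Eq. (5): first base prediction
y2_hat = X[:, 2:] @ w2              # Eq. (6): second base prediction

Z = np.column_stack([y1_hat, y2_hat])   # Eq. (7): meta-feature matrix, shape (n, 2)

# Eq. (8): ridge meta-learner trained on (Z, y)
alpha = 1.0
w_meta = np.linalg.solve(Z.T @ Z + alpha * np.eye(2), Z.T @ y)
y_stack = Z @ w_meta
```

In practice the meta-features should come from out-of-fold base predictions to avoid leakage; the sketch omits that detail for brevity.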
Wind energy simulation framework
The wind energy simulation framework, illustrated in Fig. 4, represents a comprehensive computational model that integrates all critical components of a wind power generation system to analyze and optimize energy conversion from wind resources to the electrical grid. The framework begins with aerodynamic modeling of wind turbine interactions, incorporating dynamic pitch control algorithms that continuously adjust blade angles to maximize energy capture while maintaining safety constraints across varying wind conditions. The simulation encompasses drivetrain mechanics, including gearbox efficiency, rotational dynamics, and mechanical losses during the speed conversion process from slow-rotating turbine shafts to high-speed generator inputs. The electrical conversion modeling focuses on the Permanent Magnet Synchronous Generator (PMSG) and power electronics, simulating electromagnetic interactions, voltage regulation, frequency control, and power conditioning circuits necessary for grid-compatible output. Finally, the framework incorporates grid connection modeling, which evaluates system interactions with the broader electrical network, including load balancing, power flow analysis, and considerations for grid stability. This holistic simulation approach enables engineers to predict system performance under diverse operating conditions, optimize component design and control strategies, and ensure reliable integration of renewable wind energy into existing electrical infrastructure while maintaining power quality standards and grid stability requirements.
Fig. 4 Wind Power Generation System: From Turbine to Grid—A Complete Energy Conversion and Transmission Flow Diagram.
Aerodynamic wind turbine model
The mechanical power output of the wind turbine is calculated from the power equation:
$$P_{m} = 0.5 \times \rho \times A \times C_{p}(\lambda, \beta) \times V^{3}$$
(9)
where,
V – wind speed in m/s,
Cp – power coefficient as a function of tip-speed ratio (λ) and pitch angle (β),
ρ – air density in kg/m³,
A – the swept area of the wind turbine blades,
Pm – output mechanical power28.
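Equation (9) translates directly into code (the default ρ is standard sea-level air density; the rotor radius and Cp value are illustrative assumptions, not the study's turbine parameters):

```python
import math

def mechanical_power(v, rho=1.225, radius=40.0, cp=0.45):
    """P_m = 0.5 * rho * A * Cp * V^3, with A the rotor swept area in m^2."""
    area = math.pi * radius ** 2
    return 0.5 * rho * area * cp * v ** 3

# Cubic dependence on wind speed: doubling V gives 8x the power
p6 = mechanical_power(6.0)
p12 = mechanical_power(12.0)
```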
Figure 5 illustrates the relationship between turbine output power and turbine speed across a range of wind velocities, from 5 m/s to 11.4 m/s. It shows that power generation peaks at various turbine speeds, with the base wind speed of 9 m/s achieving the maximum power. Higher wind speeds result in greater power output, but they also exhibit a more pronounced decline in efficiency as turbine speeds increase.
Fig. 5 Wind Turbine Power Characteristics at Various Wind Speeds with Fixed Pitch Angle (β = 0°).
Pitch control system
Pitch control in wind turbines adjusts the blade angle (pitch angle) to regulate power output: at high wind speeds it limits mechanical stress or stops the turbine to prevent damage, while at low to moderate wind speeds it improves aerodynamic efficiency.
- At rated wind speed or higher, increase the blade pitch angle (β) to reduce Cp and limit power output.
- Control law (PID control recommended):
$$\beta = K_{p} \times \left( P_{error} \right) + K_{i} \times \int P_{error} \, dt + K_{d} \times \frac{dP_{error}}{dt}$$
(10)
where \(P_{error} = P_{rated} - P_{m}\)45.
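A discrete-time sketch of the PID pitch law in Eq. (10). Note that the loop below takes the error as P_m − P_rated, flipping the sign convention of Eq. (10) as printed, so that power above rating produces a positive pitch command; gains, time step, and the saturation limits are illustrative assumptions:

```python
class PitchPID:
    """Discrete PID on the power error, returning a saturated pitch-angle command."""

    def __init__(self, kp=0.01, ki=0.001, kd=0.0001, dt=0.1, beta_max=30.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.beta_max = beta_max
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, p_rated, p_m):
        error = p_m - p_rated                       # excess power above rating
        self.integral += error * self.dt            # integral term accumulator
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        beta = (self.kp * error + self.ki * self.integral
                + self.kd * derivative)
        return min(max(beta, 0.0), self.beta_max)   # pitch held in [0, beta_max] degrees

pid = PitchPID()
beta_above = pid.step(p_rated=7386.0, p_m=9000.0)        # above rated -> pitch up
beta_below = PitchPID().step(p_rated=7386.0, p_m=5000.0)  # below rated -> no pitch
```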
Drivetrain model
The rotor, shaft, gearbox (if applicable), generator, and power electronics are all represented by the drivetrain model of a wind energy conversion system (WECS). Multi-mass dynamic equations are utilized to analyze torque transmission, rotational dynamics, and energy conversion efficiency, thereby optimizing performance, reliability, and fault detection in various drivetrain configurations, including geared, direct-drive, and hybrid systems.
Model the rotor, shaft, and generator dynamics. A two-mass model is commonly used:
$$J_{r} \frac{d\omega_{r}}{dt} = T_{m} - T_{s}$$
(11)
$$J_{g} \frac{d\omega_{g}}{dt} = T_{s} - T_{e}$$
(12)
where Jr and Jg are the moments of inertia of the rotor and generator, ωr and ωg their angular velocities, Tm the mechanical torque from the turbine, Ts the shaft torque, and Te the electrical torque developed by the generator46.
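Equations (11) and (12) can be integrated with a forward-Euler step (inertias, torques, and time step below are illustrative assumptions, not the study's parameters):

```python
def two_mass_step(w_r, w_g, t_m, t_s, t_e, j_r=5.0, j_g=0.5, dt=0.01):
    """One Euler step of J_r*dw_r/dt = T_m - T_s and J_g*dw_g/dt = T_s - T_e."""
    w_r_next = w_r + dt * (t_m - t_s) / j_r
    w_g_next = w_g + dt * (t_s - t_e) / j_g
    return w_r_next, w_g_next

# With T_m > T_s the rotor accelerates; with T_s = T_e the generator speed holds
w_r, w_g = two_mass_step(w_r=10.0, w_g=100.0, t_m=50.0, t_s=40.0, t_e=40.0)
```

A full drivetrain model would also carry a shaft-twist state linking Ts to the speed difference; the sketch treats Ts as a given input.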
Generator model (PMSG)
The electrical and mechanical dynamics of a wind energy conversion system (WECS) are represented by the Permanent Magnet Synchronous Generator (PMSG) model.
Permanent magnets on the rotor improve efficiency and reliability by removing the need for an external excitation system, and the system is controlled by electromagnetic equations in the dq-reference frame, which are expressed as follows:
$$V_{d} = R_{s} I_{d} + L_{d} \frac{dI_{d}}{dt} - \omega L_{q} I_{q}$$
(13)
$$V_{q} = R_{s} I_{q} + L_{q} \frac{dI_{q}}{dt} + \omega L_{d} I_{d} + \omega \lambda_{m}$$
(14)
$$T_{e} = \frac{3}{2} P \left( \lambda_{m} I_{q} + (L_{d} - L_{q}) I_{d} I_{q} \right)$$
(15)
where, Vd, Vq– stator voltages, Id, Iq– stator currents, Rs– stator resistance, Ld, Lq– stator inductances, λm– flux linkage from permanent magnets, ω- electrical angular velocity, Te– electromagnetic torque, P- number of pole pairs. This model supports control algorithms that maximize wind energy conversion while preserving grid stability and dynamic performance, such as Field-Oriented Control (FOC) and Maximum Power Point Tracking (MPPT)47. Table 2 shows the values of the blocks used in building the wind energy conversion system model in MATLAB Simulink. Figure 6 illustrates a comprehensive wind turbine control system modeled in Simulink, incorporating a Permanent Magnet Synchronous Generator (PMSG) with electromagnetic torque feedback loops and wind speed inputs to optimize power generation performance. The system integrates key components, including pitch angle control, drive train dynamics, generator speed regulation, and the conversion of mechanical energy into a three-phase AC electrical output.
Table 2 Block parameters of the wind energy conversion system.
Fig. 6 PMSG-Based Wind Turbine Control System with Drive Train and Pitch Control using predicted wind speed from the ML models.
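The electromagnetic torque of Eq. (15) in code (the pole-pair count, flux linkage, and inductances are illustrative placeholders, not the Table 2 parameters):

```python
def pmsg_torque(i_d, i_q, p=4, lam_m=0.175, l_d=8.5e-3, l_q=8.5e-3):
    """T_e = 1.5 * P * (lam_m * I_q + (L_d - L_q) * I_d * I_q).

    For a surface-mounted PMSG, L_d == L_q, so the reluctance term vanishes
    and torque is proportional to the q-axis current.
    """
    return 1.5 * p * (lam_m * i_q + (l_d - l_q) * i_d * i_q)

# Field-oriented control typically holds I_d = 0 and commands torque via I_q
t_e = pmsg_torque(i_d=0.0, i_q=20.0)
```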
Physics-informed neural network (PINN)
A Physics-Informed Neural Network’s (PINN) core architecture and operational process are depicted in Fig. 7. Physical inputs are received by the neural network, which translates them into output variables. It is composed of an input layer, several hidden layers, and an output layer. In contrast to traditional neural networks, PINNs use a loss function that blends differential equation loss with physical constraint loss to incorporate physical information. This ensures that the anticipated results comply with the governing physical laws, in addition to aligning with the training data. The weights and biases are finalized when the overall loss drops below a predetermined tolerance, which is achieved by continuing the training cycle. By bridging the gap between physics-based systems and data-driven models, this hybrid learning technique makes PINNs ideal for intricate scientific and engineering applications.
Fig. 7 Workflow of a Physics-Informed Neural Network (PINN) for Solving Physical System Constraints.
Mathematical modelling of PINN
The goal is to model electricity generation \(\hat{P}(t)\) from wind energy using a neural network that:
- Learns from real-time measurements
- Obeys physical constraints (from simulation)
- Enforces power–wind relationships and turbine characteristics
Inputs and features
Let,
v(t)= Wind speed at time t (from real data)
θ(t)= Pitch angle (from simulation)
wr(t)= Rotor speed (from simulation)
Tm(t)= Mechanical torque (from simulation)
Te(t)= Electrical torque (from simulation)
X(t)∈ℝd= Complete feature vector including time-based cyclic features
$$X\left( t \right) = \left[ v\left( t \right), \theta\left( t \right), w_{r}\left( t \right), T_{m}\left( t \right), T_{e}\left( t \right), \sin\left( \frac{2\pi \, Month}{12} \right), \cos\left( \frac{2\pi \, Month}{12} \right), \ldots \right]$$
(16)
fθ= Neural network with parameters θ
\(\hat{P}(t)\) = Predicted electrical power output at time t.
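The cyclic month terms in the feature vector of Eq. (16) keep December and January adjacent in feature space, avoiding the discontinuity a raw month index would introduce; a NumPy sketch:

```python
import numpy as np

def cyclic_month_features(month):
    """Map month 1..12 to (sin, cos) coordinates on the unit circle."""
    angle = 2.0 * np.pi * np.asarray(month) / 12.0
    return np.sin(angle), np.cos(angle)

s_dec, c_dec = cyclic_month_features(12)
s_jan, c_jan = cyclic_month_features(1)
# December and January encode to nearby points, unlike raw indices 12 and 1
dist = np.hypot(s_jan - s_dec, c_jan - c_dec)
```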
where,
V – wind speed in m/s,
Cp – power coefficient as a function of tip-speed ratio (λ) and pitch angle (β),
ρ – air density in kg/m³,
A – the swept area of the wind turbine blades,
Pm – output mechanical power. Using Betz’s limit,
$$P_{expected} \left( t \right) = \eta \times P_{m} (t)$$
(19)
with a typical efficiency η = 0.6.
- Constraint: predicted power must not exceed expected power
$$L_{efficiency} = \frac{1}{N}\sum_{i=1}^{N} \left[ \max\left( 0, \hat{P}(t_{i}) - \eta \times P_{m}(t_{i}) \right) \right]^{2}$$
(20)
- Cut-in, rated, and cut-out constraint
Cut-in speed Vin = 3 m/s
Rated speed Vrated = 9 m/s
Cut-out speed Vout = 25 m/s
$$\eta(V) = \left\{ \begin{array}{ll} 0 & V < V_{in} \ \text{or} \ V > V_{out} \\ 1 & \text{otherwise} \end{array} \right.$$
Constraints:
$$L_{cutoff} = \frac{1}{N}\sum_{i=1}^{N} \left[ \hat{P}(t_{i}) \times \left( 1 - \eta(V_{i}) \right) \right]^{2}$$
(21)
From physics and simulation:
$$P_{sim}(t) = w_{r}(t) \times T_{e}(t)$$
(22)
So enforce:
$$L_{sim} = \frac{1}{N}\sum_{i=1}^{N} \left( \hat{P}(t_{i}) - P_{sim}(t_{i}) \right)^{2}$$
(23)
where αi are tunable weights for each physical constraint.
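The constraint losses in Eqs. (20), (21), and (23) combine with the supervised data loss into one weighted objective. A NumPy sketch of the loss evaluation on toy arrays (the α weights and sample values are illustrative; a real PINN would evaluate these terms inside the autodiff training graph):

```python
import numpy as np

def pinn_losses(p_hat, p_obs, p_m, p_sim, v, eta=0.6, v_in=3.0, v_out=25.0):
    """Return (data, efficiency, cutoff, sim) loss terms for a batch of predictions."""
    data = np.mean((p_hat - p_obs) ** 2)
    # Eq. (20): penalize predictions above eta * P_m (Betz-limited expected power)
    efficiency = np.mean(np.maximum(0.0, p_hat - eta * p_m) ** 2)
    # Eq. (21): outside [v_in, v_out] the turbine produces nothing
    running = (v >= v_in) & (v <= v_out)
    cutoff = np.mean((p_hat * (~running)) ** 2)
    # Eq. (23): match the simulated power P_sim = w_r * T_e of Eq. (22)
    sim = np.mean((p_hat - p_sim) ** 2)
    return data, efficiency, cutoff, sim

alphas = (1.0, 0.5, 0.5, 0.5)   # hypothetical weights: data term plus three alpha_i
terms = pinn_losses(
    p_hat=np.array([100.0, 0.0, 250.0]),
    p_obs=np.array([110.0, 0.0, 240.0]),
    p_m=np.array([200.0, 0.0, 400.0]),
    p_sim=np.array([105.0, 0.0, 245.0]),
    v=np.array([6.0, 2.0, 9.0]),
)
total_loss = sum(a * t for a, t in zip(alphas, terms))
```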
Dataset configuration for PINN model evaluation
This section outlines the comprehensive dataset strategy employed for PINN model evaluation, which leverages a dual-source data approach to maximize both empirical accuracy and physical consistency. The evaluation framework employs a hybrid data integration strategy to train and validate the PINN model by combining real-world and simulation-based datasets. The real-time MERRA-2 dataset (Case 1) provides globally consistent meteorological inputs, including date, time, wind speed, and electricity generation (in kW). To complement this, a synthetic dataset is generated through MATLAB-based physical simulations (Case 2), where partial differential equations governing wind turbine aerodynamics and electromechanical behavior are used to compute critical operational parameters such as pitch angle, rotor speed, mechanical torque, electrical torque, and generated power. By integrating these two data sources, the PINN model can simultaneously learn from empirical observations and the governing physical laws of wind energy conversion. This approach ensures that predictions are not only data-driven but also physically consistent, effectively capturing the complex, real-world dynamics essential for accurate and reliable wind power forecasting.
Table 3 presents the performance analysis across varying wind speeds, illustrating the characteristic operational behaviour of wind turbines under different atmospheric conditions and revealing distinct performance zones based on wind velocity. At the above-rated wind speed of 14 m/s, the turbine operates with active pitch control (1.072 p.u) to regulate power output, achieving high rotor speed (153.1 rad/s), near-optimal mechanical torque (0.9979 Nm), substantial electrical torque (64.2 Nm), and maximum electricity generation (9788.98 W). The rated wind speed condition of 9 m/s represents optimal operation, where the turbine maintains efficient performance with zero pitch angle adjustment, a good rotor speed (133.3 rad/s), moderate mechanical torque (0.7216 Nm), and a substantial power output (7386.1 W). In contrast, the below-rated wind speed scenario of 6 m/s illustrates sub-optimal operation where the turbine experiences reduced efficiency with minimal mechanical torque (0.2531 Nm), lower rotor speed (95.48 rad/s), and significantly decreased electricity generation (3806.93 W), highlighting the direct correlation between wind speed availability and turbine performance parameters across the operational envelope. The analysis reveals that electricity generation varies dramatically with wind speed, following the cubic relationship P ∝ V3, where power output increases from 3806.93 W at 6 m/s to 9788.98 W at 14 m/s. This demonstrates how wind velocity directly governs energy conversion efficiency, making accurate wind resource assessment critical for reliable renewable energy planning and grid integration.
Table 3 Wind Turbine Performance Parameters at Different Wind Speed Operating Conditions.