The State Bank of Pakistan (SBP) injected a staggering Rs1.4 trillion into the financial system on Friday through dual open market operations, using both conventional and Shariah-compliant instruments to manage the huge liquidity demand.
In the conventional reverse repo operation, the SBP injected Rs1.037 trillion against total offers of Rs1.150 trillion. The central bank accepted Rs76 billion at 11.06% for seven-day tenor and Rs962 billion at a cut-off rate of 11.04% for 13-day tenor, with pro rata allotment applied where necessary.
Simultaneously, the SBP conducted a Shariah-compliant Mudarabah-based injection of Rs363 billion, accepting all offered bids – Rs241 billion at 11.14% for seven days and Rs122 billion at 11.13% for 13 days.
Moreover, the Pakistani rupee registered a marginal gain against the US dollar, appreciating 0.01% in the inter-bank market. At close, the local currency settled at 281.90, up two paisa compared with the previous day’s close at 281.92. This marked the rupee’s 11th consecutive session of gains against the greenback.
According to Ismail Iqbal Securities, the rupee has now appreciated 0.66% in the current fiscal year to date, although it remains down 1.19% on a calendar-year-to-date basis.
Analysts at AKD Securities highlighted that the rupee has strengthened for the fifth consecutive week, reflecting improved sentiment amid stability in foreign exchange reserves and remittance inflows.
Meanwhile, gold prices in Pakistan fell, contrary to movements in the international market, where bullion rebounded after comments from US Federal Reserve Chair Jerome Powell at the Jackson Hole symposium fueled expectations of a September rate cut.
According to the All Pakistan Sarafa Gems and Jewellers Association, the price of gold declined Rs1,500 to settle at Rs355,700 per tola, while the rate for 10 grams dropped Rs1,286 to Rs304,955. A day earlier, gold had gained Rs2,000 to close at Rs357,200 per tola.
Internationally, spot gold was up 0.7% at $3,362.53 per ounce by 10:26 am EDT (1426 GMT), while US gold futures were 0.8% lower at $3,408.20, Reuters reported.
Market analysts noted that gold initially traded in a narrow $25-40 range, with little momentum amid a lack of progress on geopolitical fronts such as the Russia-Ukraine conflict. However, Powell’s remarks that future data could warrant interest rate cuts triggered renewed buying, lifting prices to a session high of $3,380.
Interactive Commodities Director Adnan Agar said that the market remained range bound but faced strong resistance at the $3,400 level. “If gold breaks this barrier, the next target is projected around $3,450,” he said.
Agar added that expectations of a downward shift in US interest rates, combined with persistent geopolitical risks, continue to bolster the metal’s appeal as a safe-haven asset.
With US rates having remained unchanged for an extended period, the possibility of an imminent cut has reinforced bullish sentiment, though traders caution that volatility will likely persist in the short term.
Coordinating laboratory: Teagasc Food Research Centre, Moorepark, Fermoy, Co Cork P61 C996, Ireland.
Participating laboratories:
Laboratory of Food Chemistry and Biochemistry, Department of Food Science and Technology, School of Agriculture, Aristotle University of Thessaloniki, P.O. Box 235, 54124, Thessaloniki, Greece
Global Oatly Science and Innovation Centre, Rydbergs Torg 11, Space Building, Science Village, 22 484 Lund, Sweden
Laboratory of Food Technology, Department of Microbial and Molecular Systems (M2S), KU Leuven, Kasteelpark Arenberg 23, PB 2457, 3001, Leuven, Belgium
INRAE, Institut Agro, STLO, 35042 Rennes, France
School of Biosciences, Faculty of Health and Medical Sciences, University of Surrey, Guildford, GU2 7XH, United Kingdom
Nofima AS, Norwegian Institute of Food, Fisheries and Aquaculture Research, PB 210, N-1433, Ås, Norway
Center for Innovative Food (CiFOOD), Department of Food Science, Aarhus University, Agro Food Park 48, Aarhus N 8200, Denmark
Department of Horticulture, Martin-Gatton College of Agriculture, Food and Environment, University of Kentucky, Lexington, Kentucky, USA
Department of Agricultural, Food, Environmental and Animal Sciences, University of Udine, Italy
Wageningen Food & Biobased Research, Wageningen University & Research, 6708 WG Wageningen, The Netherlands
Quadram Institute Bioscience, Rosalind Franklin Road, Norwich Research Park, Norwich, NR4 7UQ, United Kingdom
Department of Food Engineering, Faculty of Engineering, Ege University, 35100, İzmir, Türkiye
Materials
The chemicals and four test products used in the ring study are presented in Table 3. They were ordered by the coordinating laboratory, aliquoted and shipped to each of the participating laboratories. All laboratories received aliquots from the same batch of each product, with the exception of 3,5-dinitrosalicylic acid (DNSA), which came from two different lots. Prior to shipping, calibration curves established with solutions prepared from both lots were compared and showed nearly equivalent results (Figure S1 in Supplementary Material, Section “Protocol implementation at each laboratory”).
Table 3 Products supplied to the laboratories participating in the ring trial.
Equipment needed
The list of equipment required is provided as guidance below.
Preparation of reagents and enzyme solutions
Vortex mixer, pH meter with glass electrode, heating/stirring plate, incubator.
Enzyme assay
Water-bath or thermal shaker (e.g. PCMT Thermoshaker, Grant Instruments, United Kingdom) for enzyme–substrate incubations at 37 °C. Boiling bath (e.g. SBB Aqua 5 Plus, Grant Instruments, United Kingdom) or thermal shaker (e.g. PCMT Thermoshaker, Grant Instruments, United Kingdom) suitable for use at 100 °C. Spectrophotometer (e.g. Shimadzu UV-1800 Spectrophotometer, Shimadzu Corporation, Japan) or plate reader (e.g. BMG Labtech CLARIOstar Plus, BMG Labtech, Germany).
Basic materials
Volumetric flasks, heatproof bottle with lid (e.g. Duran bottle), magnetic stirrer, timer, thermocouple, safe lock microtubes (2 or 1.5 mL), heat (and water) resistant pen or labels for the microtubes, disposable standard cuvette or disposable polystyrene 96-well plate.
Preparation of reagents and enzymes
20 mM Sodium phosphate buffer (with 6.7 mM sodium chloride, pH 6.9 ± 0.3)
Prepare a stock solution by dissolving 1.22 g NaH2PO4 (anhydrous form), 1.38 g Na2HPO4 (anhydrous form) and 0.39 g NaCl in 90 mL purified water and make up the volume to 100 mL. Before use, dilute 10 mL of stock solution to 95 mL with purified water. Confirm that the pH of the buffer, when heated to 37 °C, is within the required working range (pH 6.9 ± 0.3). If needed, adjust the pH, using 1 M NaOH or HCl as required, before making up the volume to 100 mL.
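As a sanity check on the recipe (assuming anhydrous molar masses of 119.98 g/mol for NaH2PO4, 141.96 g/mol for Na2HPO4 and 58.44 g/mol for NaCl), the stock works out to roughly 200 mM total phosphate, and the 10 mL → 95 mL dilution brings it to about 21 mM phosphate and 7 mM NaCl, consistent with the nominal 20 mM / 6.7 mM working buffer:

```python
# Back-of-envelope check of the buffer recipe (molar masses are assumed values).
MW = {"NaH2PO4": 119.98, "Na2HPO4": 141.96, "NaCl": 58.44}  # g/mol, anhydrous

stock_volume_L = 0.100  # stock is made up to 100 mL

def millimolar(grams, mw, volume_L=stock_volume_L):
    """Concentration in mM for a given mass dissolved in volume_L litres."""
    return grams / mw / volume_L * 1000

phosphate_stock_mM = millimolar(1.22, MW["NaH2PO4"]) + millimolar(1.38, MW["Na2HPO4"])
nacl_stock_mM = millimolar(0.39, MW["NaCl"])

dilution = 10 / 95  # 10 mL of stock diluted to 95 mL before use

print(round(phosphate_stock_mM))                 # ~199 mM phosphate in the stock
print(round(phosphate_stock_mM * dilution, 1))   # ~20.9 mM working phosphate
print(round(nacl_stock_mM * dilution, 1))        # ~7.0 mM working NaCl
```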
Maltose calibrators
Prepare a 2% (w/v) maltose stock solution in phosphate buffer. Prepare a calibrator series by diluting the maltose stock solution in phosphate buffer as indicated in Table S2 (Supplementary Material – Section “Protocol implementation at each laboratory”). Store in the fridge (or freezer if not for use during the same day).
Colour reagent (96 mM DNSA with 1.06 M sodium potassium tartrate)
Dissolve 1.10 g of DNSA in 80 mL of 0.50 M NaOH at 70 °C in a glass beaker or bottle (partly covered to limit evaporation) on a pre-heated heat/stir plate with continuous stirring and temperature monitoring (e.g. using a thermocouple). Once the DNSA is fully dissolved, add 30 g of sodium potassium tartrate and continue stirring until it dissolves. Remove from heat and wait until the solution cools to room temperature. Bring to 100 mL with purified water. Store at room temperature protected from light for up to 6 months. If precipitation occurs during storage, re-heat to 45 °C while stirring on a heat-stir plate.
Starch solution
Potato starch pre-gelatinised in sodium phosphate buffer (1.0% w/v) is used as the substrate. Pre-heat a heat-stir plate (a setting of 250–300 °C is suggested) and pre-heat an incubator (or water bath) to 37 °C. Weigh 250 mg of potato starch into a heatproof bottle and add 750 μL of ethanol (80% v/v). Stir on a vortex mixer to wet all the starch powder (this is a critical step for the complete solubilisation of the starch). Add 20 mL of sodium phosphate buffer and mix again using a vortex mixer, making sure that the powder is fully dispersed and there are no lumps in the solution. Cover the bottle with the lid to minimise evaporation (but keep it loose enough to let out excess steam) and place on the pre-heated heat-stir plate, stirring at 180 rpm. When the solution starts bubbling, start the timer and boil on the heat-stir plate, stirring continuously, for exactly 15 min. Cool in the incubator/water bath for 15 min (or until it is safe to handle). Make up the volume of the starch solution to 25 mL in a volumetric flask by adding purified water. Store the solution in a closed bottle in an incubator (or water bath set to 37 °C) and use within 2 h. If the starch solution does not clarify significantly, a new solution needs to be prepared, as this may indicate poor solubilisation and/or gelatinisation of the starch. Prepare a fresh solution each time, as storing or freezing can cause starch retrogradation and influence the results of the assay.
α-amylase solutions
The preparation of the enzyme solutions is a critical step. Solutions prepared from enzyme powders should follow the same protocol each time to ensure adequate powder hydration and dispersion. After weighing the enzyme powder and adding the appropriate volume of sodium phosphate buffer, stock solutions should be stirred in an ice bath (at around 250 rpm) for 20 min before any further dilutions (graphical protocol in Fig. 6 and Picture S1 in the Supplementary Material). Subsequent dilution(s) of the stock solution(s) should be performed using sodium phosphate buffer to reach the recommended enzyme concentration of 1.0 ± 0.2 U/mL. For the four products tested in the ring trial, recommended concentrations are provided for reference in Table S7 (Supplementary Material). For enzyme preparations, it is recommended to start from a stock solution prepared by adding 20–100 mg of enzyme powder to 25 mL of sodium phosphate buffer. For human saliva, a stock solution can be prepared by mixing 80 µL of saliva with 920 µL of buffer.
Fig. 6
Schematic overview of the enzyme assay. Created in BioRender.com.
Each enzyme should be tested at three different concentrations prepared by diluting 0.65 mL, 1.00 and 1.50 mL of enzyme stock solution with 1.35, 1.00 and 0.50 mL of buffer, respectively (Table S3). These diluted enzyme solutions are referred to as solutions C1, C2 and C3. Enzyme solutions should always be kept on ice and used within 30 min of preparation.
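Since each tube is made up to the same 2.00 mL final volume, the three dilutions correspond to fixed fractions of the stock activity, which a short sketch makes explicit (the tube labels and volumes are those given above):

```python
# Relative concentrations of the three diluted enzyme solutions (C1-C3).
# Each tube mixes enzyme stock with buffer to a 2.00 mL total volume.
dilutions_mL = {"C1": (0.65, 1.35), "C2": (1.00, 1.00), "C3": (1.50, 0.50)}

fractions = {}
for name, (stock_mL, buffer_mL) in dilutions_mL.items():
    total = stock_mL + buffer_mL
    assert abs(total - 2.00) < 1e-9        # every tube has the same final volume
    fractions[name] = stock_mL / total     # fraction of stock activity retained

print(fractions)  # C1 = 0.325x, C2 = 0.5x, C3 = 0.75x the stock concentration
```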
Enzymatic assay
An overview of the enzyme assay is presented in Fig. 6.
Preparative procedures
Before starting, the following preparations are recommended: set the heating-block (water bath) as required to ensure 37 °C inside the microtubes (see troubleshooting advice, Table 2); pre-warm the starch solution to 37 °C; prepare a polystyrene container with ice.
Sample collection tubes
For each incubation that will be carried out, label and pre-fill four microtubes with 75 μL of DNSA colour reagent.
Incubations
Set three microtubes (one for each diluted enzyme solution C1, C2 and C3) in the preheated thermal shaker and let the temperature equilibrate before adding 500 µL of pre-warmed potato starch solution to each tube (maintain the tubes closed until the enzyme is added to prevent evaporation). Add 500 µL of diluted enzyme solution C1, C2 and C3 to the corresponding tubes at regular intervals. It is recommended to start the timer immediately when the α-amylase solution is added to the first tube and leave a 30 s interval before each subsequent addition.
Sample collection
Take a 150 μL aliquot from each tube after 3, 6, 9 and 12 min of incubation (respecting the order and intervals at which the incubations were initiated) and transfer it immediately to the corresponding sample collection tube pre-filled with DNSA to stop the reaction. Each aliquot should be taken as closely as possible to its respective sample collection time, within a maximum of ± 5 s.
Absorbance measurements
Prepare the maltose calibrators by mixing 150 µL of each maltose calibrator with 75 µL of DNSA reagent. Centrifuge the samples and calibrators (1000 g, 2 min) so that all droplets are brought back into solution. Place the samples and calibrators in the thermal shaker (or boiling bath) (100 °C, 15 min) and then transfer them to an icebox to cool for 15 min. Add 675 µL of purified water to each tube and mix by inversion. Transfer the samples and calibrators to a cuvette or pipette to a microtiter plate (300 µL per well) and record the absorbance at 540 nm (A540nm).
Ring trial organization
Preliminary testing
Throughout the protocol optimization phase, the assay was repeated multiple times by the coordinating laboratory to define practical aspects. Each of the four test products was assayed at different concentrations. The final test concentrations, chosen to give an adequate distribution of the endpoint measure (spectrophotometric absorbance), were communicated to the participating laboratories.
Protocol transference
A detailed written protocol (Supplementary material) was transferred to each participating laboratory including the recommendations for concentrations of the test products. All laboratories were invited to an online training session that included a video of the assay followed by a Q&A session to clarify any doubts. All labs carried out the assay and reported their results on a standard Excel file between May and November 2023.
Incubation temperatures
All laboratories tested the four enzyme preparations at 37 °C as described above. A subgroup of five laboratories also repeated the assays at 20 °C to establish a correlation between the results obtained at the two temperatures.
For incubations at 20 °C, the protocol was adapted as follows. A different recipe was used to prepare the 200 mM sodium phosphate buffer stock solution: 1.26 g NaH2PO4, 1.29 g Na2HPO4 and 0.39 g NaCl. The dilution (10 mL stock diluted to 95 mL with purified water) and pH (6.9 ± 0.3) were the same as for the buffer used at 37 °C. All reagents and solutions requiring buffer were freshly prepared using this recipe. The recommended concentrations of the α-amylase stock solutions were adjusted to ensure that sufficient enzymatic activity was present.
Calculations
Calibration curve
The A540nm of the colour reagent blank was subtracted from the readings of all maltose calibrators, and their concentrations (mg/mL) were plotted against the corresponding ΔA540nm. For reference, using a 96-well plate, the absorbance at 540 nm should increase linearly from approximately 0.05 (for the colour reagent blank) to 1.5 for the highest maltose concentration. The calibration blank should not be included as a data point in the calibration curve.
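The calibration fit described above can be sketched as an ordinary least-squares line through the blank-corrected absorbances. The calibrator concentrations and readings below are illustrative placeholders, not ring-trial data:

```python
import numpy as np

# Illustrative maltose calibrators (mg/mL) and raw A540 readings (placeholders).
maltose_mg_per_mL = np.array([0.25, 0.50, 1.00, 1.50, 2.00])
a540_raw = np.array([0.23, 0.42, 0.78, 1.15, 1.50])
a540_blank = 0.05  # colour reagent blank; not used as a calibration point

delta_a540 = a540_raw - a540_blank

# Fit concentration as a function of blank-corrected absorbance,
# so unknown samples can be read straight off the line.
slope, intercept = np.polyfit(delta_a540, maltose_mg_per_mL, 1)

def maltose_from_absorbance(a540):
    """Convert a raw A540 reading to maltose concentration (mg/mL)."""
    return slope * (a540 - a540_blank) + intercept

print(f"slope = {slope:.3f} mg/mL per ΔA540, intercept = {intercept:.3f}")
```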
Enzyme activity definition
The definition of α-amylase activity resulting from the application of the newly developed protocol is the following:
Based on the definition originally proposed by Bernfeld: one unit liberates 1.0 mg of maltose equivalents from potato starch in 3 min at pH 6.9 at 37 °C.
Based on the international enzyme unit definition standards: one unit liberates 1.0 μmol of maltose equivalents from potato starch in 1 min at pH 6.9 at 37 °C.
Amylase activity units based on the definition originally proposed by Bernfeld were multiplied by the conversion factor 0.97 to convert the result into IU.
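The 0.97 factor follows directly from the two definitions (assuming a maltose molar mass of 342.3 g/mol): 1 mg of maltose released over 3 min corresponds to (1000/342.3) µmol over 3 min, i.e. about 0.97 µmol/min:

```python
# Derivation of the Bernfeld-unit to IU conversion factor.
maltose_mw = 342.30  # g/mol for maltose (C12H22O11), an assumed textbook value

umol_per_mg = 1000 / maltose_mw          # µmol of maltose in 1 mg
iu_per_bernfeld_unit = umol_per_mg / 3   # spread 1 mg over the 3 min incubation

print(round(iu_per_bernfeld_unit, 2))    # -> 0.97
```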
Enzyme activity calculation
The first step was to subtract A540nm of the colour reagent blank from all readings. The calibration curve was then used to calculate the maltose concentrations (mg/mL) reached with each diluted enzyme solution (C1, C2 and C3) at each sampling point during incubations. Enzyme concentrations during incubations were then calculated as mg/mL for enzyme powders, or µL/mL for liquid (saliva) samples.
For each diluted enzyme solution (C1, C2 or C3), maltose concentrations (mg/mL) were plotted against time (min) and the corresponding linear regression was established to determine the slope of the reaction kinetics, $m$. For each enzyme concentration, units of enzyme were calculated using the following equation:

$$\text{Activity (U per mg or } \mu\text{L of enzyme product)} = 3\ \text{min} \times \frac{m \left( \frac{\text{maltose concentration (mg/mL)}}{\text{time (min)}} \right)}{\text{Enzyme concentration} \left( \frac{\text{mg}}{\text{mL}}\ \text{or}\ \frac{\mu\text{L}}{\text{mL}} \right)}$$
A template Excel file is provided for calculations in the Supplementary Material.
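The two-step calculation (slope of maltose release against time, then normalization by enzyme concentration) can be sketched as follows; the time points match the protocol's sampling schedule, but the maltose values and enzyme concentration are illustrative, not trial data:

```python
import numpy as np

# Sampling times (min) and maltose concentrations (mg/mL) measured for one
# diluted enzyme solution -- illustrative values only.
time_min = np.array([3, 6, 9, 12])
maltose_mg_per_mL = np.array([0.31, 0.62, 0.90, 1.21])

enzyme_mg_per_mL = 0.40  # enzyme concentration during incubation (assumed)

# Slope of the linear regression of maltose against time (mg/mL per min).
slope, intercept = np.polyfit(time_min, maltose_mg_per_mL, 1)

# One Bernfeld unit releases 1 mg of maltose equivalents in 3 min,
# hence the factor of 3 min in the activity equation.
activity_U_per_mg = 3 * slope / enzyme_mg_per_mL

print(f"activity = {activity_U_per_mg:.3f} U per mg of enzyme product")
```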
Statistical analysis and assessment of method’s performance
Data visualization and statistical analyses were performed in R (version 4.3.2) [29]. The packages ggplot2 [30] and ggdist [31] were used in the preparation of the plots presented in the manuscript.
Outlier analysis was conducted on non-transformed data to preserve the original variability and scale of the datasets. First, Cochran’s test (outliers package in R [32]) was used to assess intralaboratory variability and did not reveal any outliers. Subsequently, for interlaboratory comparisons, boxplot analysis, Bias Z-scores and Grubbs’ test [32] were employed complementarily. The results reported by one laboratory for three test products (pancreatin, α-amylase M and α-amylase S) assayed at 37 °C were more than 1.5 interquartile ranges below the 25th or above the 75th percentile, consistent with unsatisfactory Bias Z-scores (|z| > 3). Grubbs’ test confirmed these as outliers, and they were excluded from the statistical analysis. All results in the 20 °C dataset fell within 1.5 interquartile ranges of the 25th and 75th percentiles (Fig. 5), consistent with satisfactory Bias Z-scores (|z| < 2) (Supplementary Figure S4). While Grubbs’ test identified two potential outliers (Lab A for pancreatin and Lab D for α-amylase M), this outcome was considered less reliable given the small sample size (n = 5) and the lack of corroboration from the boxplot and Bias Z-score analyses, so these results were retained.
Statistical analysis of the dataset from the implementation of the protocol at 37 °C was carried out to investigate the effects of the tested products, concentrations and incubation conditions (thermal shaker vs. water bath with or without shaking), as well as the two-way and three-way interactions between these factors. Normality of this dataset was confirmed with the Shapiro–Wilk test (p > 0.05). Homogeneity of variances, assessed using Levene’s test in the Rstatix package, version 0.7.2 [33], was not confirmed (p < 0.001). Given the limited availability of suitable non-parametric alternatives, a logarithmic transformation was applied to this dataset, homogenising the variances and enabling a three-way ANOVA (Rstatix package). Statistically significant effects were further examined using pairwise t-test comparisons, with Bonferroni adjustment for multiple comparisons where required. The results obtained when implementing the protocol at 20 °C were normally distributed, but homogeneity of variances was not confirmed for this dataset either. The corresponding log-transformed data did not conform to normality, hence the Kruskal–Wallis test was applied to examine the significance of differences between the four products, followed by the Bonferroni-corrected Wilcoxon test for pairwise comparisons (all tests performed using the Rstatix package). Statistical significance was accepted at the 95% level.
For each laboratory and product, an individual ratio of α-amylase activity at 37 °C to 20 °C was calculated, and the mean of these ratios across all laboratories was determined for each product. The 95% confidence interval for this mean ratio was computed using the t-distribution. Normal distribution and homogeneity of variances have been confirmed for this dataset, hence one-way ANOVA was used to investigate whether the ratios obtained for each product were significantly different.
For a thorough understanding of the method’s reliability, precision and transferability across different laboratory settings, three complementary metrics were used: Z-scores based on bias scores, for a standardized evaluation of systematic errors; repeatability; and reproducibility.
Z-scores were calculated to standardize the comparison of bias scores across laboratories and products, enabling assessment of the overall agreement between individual laboratory results and the mean for each product. For each product, bias scores were first calculated for each laboratory using the mean of all laboratories as the reference value and then converted to Z-scores:
$$z = \frac{x - X}{SD}$$

where x is the individual laboratory result, X is the mean of all laboratories, and SD is the standard deviation. Z-score interpretation followed standard criteria: |z| ≤ 2 satisfactory, 2 < |z| < 3 questionable, and |z| ≥ 3 potentially unsatisfactory.
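The bias Z-score computation and its standard interpretation can be sketched directly from those definitions (the laboratory results below are illustrative, not values from the trial):

```python
import statistics

def bias_z_scores(results):
    """Z-score each laboratory's result against the all-lab mean and SD."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)  # standard deviation across laboratories
    return [(x - mean) / sd for x in results]

def classify(z):
    """Standard proficiency-testing interpretation of a z-score."""
    if abs(z) <= 2:
        return "satisfactory"
    if abs(z) < 3:
        return "questionable"
    return "potentially unsatisfactory"

# Illustrative activities (U/mg) reported by five laboratories for one product.
lab_results = [10.2, 9.8, 10.5, 10.1, 9.9]
for lab, z in zip("ABCDE", bias_z_scores(lab_results)):
    print(f"Lab {lab}: z = {z:+.2f} ({classify(z)})")
```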
Repeatability (measured as the intralaboratory coefficient of variation, CVr), which quantifies method precision within each laboratory, reflecting consistency under identical conditions, was calculated as the root mean square of the individual laboratories’ CVs:

$$CV_{r} = \sqrt{\frac{\sum_{i=1}^{L} CV_{i}^{2}}{L}}$$

where CVr is the coefficient of variation under repeatability conditions (intralaboratory), i indexes each laboratory, CVi is the coefficient of variation for laboratory i, and L is the number of participating laboratories.
Reproducibility (measured as the interlaboratory coefficient of variation, CVR), a measure of the method’s consistency across different laboratories, indicates its robustness to varying environments and operators. It was calculated for each tested product as:

$$CV_{R} = \frac{SD}{X} \times 100$$

where CVR is the coefficient of variation under reproducibility conditions (interlaboratory), and SD and X are the standard deviation and mean calculated from the interlaboratory data.
Coefficients of variation below 30% [15,16] are frequently considered indicators of small intra- and interlaboratory variability. In some cases, critical thresholds for repeatability (intralaboratory CV) are set at 20% [34].
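Both precision metrics can be sketched from the definitions above; the replicate values below are illustrative, not results from the ring trial:

```python
import math
import statistics

def cv_percent(values):
    """Coefficient of variation (%) of a set of measurements."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def repeatability_cv(per_lab_replicates):
    """CVr: root mean square of the within-laboratory CVs."""
    cvs = [cv_percent(replicates) for replicates in per_lab_replicates]
    return math.sqrt(sum(cv ** 2 for cv in cvs) / len(cvs))

def reproducibility_cv(lab_means):
    """CVR: coefficient of variation of the laboratory means (interlaboratory)."""
    return cv_percent(lab_means)

# Illustrative triplicate activities (U/mg) from three laboratories.
replicates = [[10.1, 10.3, 10.0], [9.6, 9.9, 9.7], [10.6, 10.4, 10.8]]
lab_means = [statistics.mean(r) for r in replicates]

print(f"CVr = {repeatability_cv(replicates):.1f}%")
print(f"CVR = {reproducibility_cv(lab_means):.1f}%")
```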
Every time a driver puts 10 litres of fuel in their car, they’re paying about $5 in tax that goes to the federal government.
That is, of course, unless they drive an electric vehicle. No petrol or diesel being bought means the government loses that 51c per litre.
Over recent years, the Productivity Commission has been calling for reforms on how revenue is raised from vehicles, and the Albanese government has been making noises that a road user charge could be put on EVs.
After the government’s economic reform roundtable this week, the treasurer, Jim Chalmers, said there was “a lot of conceptual support for road user charging”, though the details – including the type of charge and which vehicles would be included – were still to be determined.
State treasurers will discuss the concept when they meet next month.
According to the Electric Vehicle Council, about 298,000 battery electric vehicles and 81,000 plug-in hybrids have been sold in Australia so far. While that number is going up – EV sales were 13% of new car sales in the last quarter, the highest proportion on record – they still make up less than 2% of the 21.7m vehicles on the road.
“We’re still in the early adopter phase on EVs and well behind the adoption rates of other advanced economies,” says Prof Matt Burke, a transport expert at Griffith University and an EV owner.
He says there is “widespread agreement” and a “coalition of the willing” among policy experts and automobile clubs that a road user charge is coming.
But what it looks like is still up for debate. And some are worried that bringing a charge in too early could stymie the uptake of EVs.
Burke says governments could decide on a flat fee for EV users, or a fee related to how far vehicles drive that could also include an allowance for the weight of the vehicle.
“Electric vehicles don’t pollute in the same way as other vehicles, but they are a little bit heavier and that deteriorates the road surface a bit more,” he says.
Where the excise goes
But what is the fuel excise actually used for?
“People think that fuel excise pays for roads, but it doesn’t,” says Alison Reeve, director of the energy and climate program at the Grattan Institute.
According to the Parliamentary Budget Office, less than 6% of the fuel excise the government collects goes into a special account for states and territories to spend on road infrastructure.
Fuel excise is not what is known as a “hypothecated tax” – that is, a tax where the spending of the revenue is directed at a particular issue, like roads.
“[Fuel excise] hasn’t been hypothecated since 1992. But people still think it’s how it works,” says Reeve. “Some of that revenue might go to roads and some of it might go for new carpets in Parliament House.”
Falling revenue
The government’s revenue from fuel excise is definitely falling, but it’s not really because of EVs.
From a peak in the 1980s, the amount of fuel excise as a percentage of government revenue has fallen from about 11% to 4%, thanks largely to improved fuel efficiency.
The average passenger vehicle in Australia burned 11.3 litres of fuel per 100 kilometres in 2005. Now it burns only 6.9 litres, and with new efficiency standards in place for new vehicles, that number is likely to drop further.
Modern petrol cars generally burn less fuel per kilometre than older models, reducing fuel excise revenue. Photograph: Joel Carrett/AAP
A future dominated by electric cars would see a dramatic drop in the fuel excise. Right now, the government is left with about $15bn from the tax after it has given full rebates to fuel that was burned off roads – mostly on mining sites – and partial rebates for heavy vehicles.
There would be good arguments that the amount of tax collected for driving a vehicle should reflect its cost to society, such as the health impacts of cars that run on fossil fuels and the thousands of premature deaths each year caused by breathing in particulates. There’s also the accumulating damage to the climate from the release of greenhouse gases.
But Reeve says the amount of excise charged by the government “is just a number that the government thinks it can get away with”.
Right policy, wrong time?
The Electric Vehicle Council’s head of legal, policy and advocacy, Aman Gaur, says a road user charge “is going to be a reality” but it needs to be paid by all vehicles, not just EVs.
The council represents car companies making and selling electric cars and says EV drivers should be exempt from a charge until 30% of new vehicle sales are electric.
“Just hitting EV drivers will be counterproductive,” says Gaur. “We support a charge for all types of vehicles. We don’t want to see a model that slams the brakes on our transition to a cleaner transport economy.”
The New South Wales government has proposed its own road user charge which, if there is no federal scheme, could be introduced in mid-2027. The proposed fee would be 2.97 cents per kilometre for an EV and 2.37c for a plug-in hybrid.
An EV driver doing 10,000km would expect to pay $300 under that proposed scheme. New cars sold in Australia average 6.9 litres of fuel use per 100km. A driver of an average new petrol car doing the same distance pays about $352 in fuel excise.
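Those figures can be checked against the 51c-per-litre excise rate quoted earlier, with the NSW per-kilometre rate and the fleet-average fuel use as stated above:

```python
# Back-of-envelope comparison of the proposed NSW EV charge with fuel excise.
distance_km = 10_000
ev_rate_c_per_km = 2.97          # proposed NSW charge for an EV
excise_dollars_per_litre = 0.51  # federal fuel excise (approximate)
litres_per_100km = 6.9           # average new petrol car

ev_charge = distance_km * ev_rate_c_per_km / 100
petrol_excise = distance_km / 100 * litres_per_100km * excise_dollars_per_litre

print(f"EV road user charge: ${ev_charge:.0f}")      # roughly $300
print(f"Petrol fuel excise:  ${petrol_excise:.0f}")  # roughly $352
```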
“Paying $300 might not be the end of the world for an EV enthusiast, but we need to think about the average person looking to buy an EV. For those people, that $300 might be the difference [of them buying the car],” Gaur says.
But there are other advantages to a general road user charge, Burke points out. It can give a government the ability to give discounts for some people, such as pensioners or the unemployed, or incorporate congestion charging.
A big question will be whether revenue from a road user charge would go into the government’s general coffers or if it would be ringfenced. The Australian Automobile Association has suggested revenue from road user charges on EVs should go into transport infrastructure, including building more recharging stations.
Helen Rowe, the transport lead at the Climateworks Centre, says: “If designed well, [a road user charge] could do far more than just plug a revenue gap.
“It could help cut congestion, reduce emissions, lower infrastructure costs and improve the overall efficiency of Australia’s transport network.”
On 22 August 2025, Ørsted’s subsidiary Revolution Wind LLC, a 50/50 joint venture with Global Infrastructure Partners’ Skyborn Renewables, received an order from the US Department of the Interior’s Bureau of Ocean Energy Management (BOEM) instructing the project to stop activities on the outer continental shelf related to the Revolution Wind project. Revolution Wind is complying with the order and is taking appropriate steps to stop offshore activities, ensuring the safety of workers and the environment.
The project commenced offshore construction following the final federal approval from BOEM last year. The project is 80% complete, with all offshore foundations and 45 of 65 wind turbines installed.
Ørsted is evaluating all options to resolve the matter expeditiously. This includes engagement with relevant permitting agencies for any necessary clarification or resolution as well as through potential legal proceedings, with the aim being to proceed with continued project construction towards COD in the second half of 2026.
Revolution Wind is fully permitted, having secured all required federal and state permits including its Construction and Operations Plan approval letter on 17 November 2023 following reviews that began more than nine years ago. Revolution Wind has 20-year power purchase agreements to deliver 400 MW of electricity to Rhode Island and 304 MW to Connecticut, enough to power over 350,000 homes across both states to meet their growing energy demand. As a reference, South Fork Wind, which is adjacent to Revolution Wind and uses the same turbine technology, delivered reliable energy to New York at a capacity factor of 53% for the first half of 2025, on par with the state’s baseload power sources.
Ørsted is investing in American energy generation, grid upgrades, port infrastructure, and a supply chain, including US shipbuilding and manufacturing, extending to more than 40 states. Revolution Wind is already employing hundreds of local union workers supporting both onshore and offshore construction activities. Ørsted’s US offshore wind projects have totalled approximately 4 million labour union hours to date, 2 million of which are with Revolution Wind.
Ørsted is evaluating the potential financial implications of this development, considering a range of scenarios, including legal proceedings. Ørsted will, in due course, advise the market on the potential impact of the order on the plan announced on 11 August 2025 (company announcement 12/2025) to conduct a rights issue. Existing shareholders and prospective investors are advised to await further announcements by the company.
Global Media Relations Tom Christiansen +45 99 55 60 17 tomlc@orsted.com
About Ørsted The Ørsted vision is a world that runs entirely on green energy. Ørsted develops, constructs, and operates offshore and onshore wind farms, solar farms, energy storage facilities, and bioenergy plants. Ørsted is recognised on the CDP Climate Change A List as a global leader on climate action and was the first energy company in the world to have its science-based net-zero emissions target validated by the Science Based Targets initiative (SBTi). Headquartered in Denmark, Ørsted employs approx. 8,300 people. Ørsted’s shares are listed on Nasdaq Copenhagen (Orsted). In 2024, the group’s revenue was DKK 71.0 billion (EUR 9.5 billion). Visit orsted.com or follow us.
Trump has repeatedly attacked the Fed’s chair, Jerome Powell, for not cutting its short-term interest rate, and even threatened to fire him. Powell, who will speak Friday at an economic symposium in Jackson Hole, Wyoming, says the Fed wants to see how the economy responds to Trump’s sweeping tariffs on imports, which Powell says could push up inflation.
Powell’s caution has infuriated Trump, who has demanded the Fed cut borrowing costs to spur the economy and reduce the interest rates the federal government pays on its debt. Trump has also accused Powell of mismanaging the U.S. central bank’s $2.5 billion building renovation project.
Firing the Fed chair or forcing out a governor would threaten the Fed’s venerated independence, which has long been supported by most economists and Wall Street investors. Here’s what to know about the Fed:
Why the Fed’s independence matters
The Fed wields extensive power over the U.S. economy. By cutting the short-term interest rate it controls — which it typically does when the economy falters — the Fed can make borrowing cheaper and encourage more spending, accelerating growth and hiring. When it raises the rate — which it does to cool the economy and combat inflation — it can weaken the economy and cause job losses.
Economists have long preferred independent central banks because they can more easily take unpopular steps to fight inflation, such as raising interest rates, which makes borrowing to buy a home, car, or appliance more expensive.
The importance of an independent Fed was cemented for most economists after the extended inflation spike of the 1970s and early 1980s. Former Fed Chair Arthur Burns has been widely blamed for allowing the painful inflation of that era to accelerate by succumbing to pressure from President Richard Nixon to keep rates low heading into the 1972 election. Nixon feared higher rates would cost him the election, which he won in a landslide.
Paul Volcker was eventually appointed chair of the Fed in 1979 by President Jimmy Carter, and he pushed the Fed’s short-term rate to the stunningly high level of nearly 20%. (It is currently 4.3%). The eye-popping rates triggered a sharp recession, pushed unemployment to nearly 11%, and spurred widespread protests.
Yet Volcker didn’t flinch. By the mid-1980s, inflation had fallen back into the low single digits. Volcker’s willingness to inflict pain on the economy to throttle inflation is seen by most economists as a key example of the value of an independent Fed.
Investors are watching closely
An effort to fire Powell would almost certainly cause stock prices to fall and bond yields to spike higher, pushing up interest rates on government debt and raising borrowing costs for mortgages, auto loans, and credit card debt. The interest rate on the 10-year Treasury is a benchmark for mortgage rates.
Most investors prefer an independent Fed, partly because it typically manages inflation better without being influenced by politics but also because its decisions are more predictable. Fed officials often publicly discuss how they would alter interest rate policies if economic conditions changed.
If the Fed was more swayed by politics, it would be harder for financial markets to anticipate — or understand — its decisions.
The Fed’s independence doesn’t mean it’s unaccountable
Fed chairs like Powell are appointed by the president to serve four-year terms, and have to be confirmed by the Senate. The president also appoints the six other members of the Fed’s governing board, who can serve staggered terms of up to 14 years.
Those appointments can allow a president over time to significantly alter the Fed’s policies. Former president Joe Biden appointed four of the current seven members: Powell, Lisa Cook, Philip Jefferson, and Michael Barr. A fifth Biden appointee, Adriana Kugler, stepped down unexpectedly on Aug. 1, about five months before the end of her term. Trump has already nominated his top economist, Stephen Miran, as a potential replacement, though he will require Senate approval. Cook’s term ends in 2038, so forcing her out would allow Trump to appoint a loyalist sooner.
Trump will be able to replace Powell as Fed chair in May 2026, when Powell’s term expires. Yet 12 members of the Fed’s interest-rate-setting committee have a vote on whether to raise or lower interest rates, so even replacing the chair doesn’t guarantee that Fed policy will shift the way Trump wants.
Congress, meanwhile, can set the Fed’s goals through legislation. In 1977, for example, Congress gave the Fed a “dual mandate” to keep prices stable and seek maximum employment. The Fed defines stable prices as inflation at 2%.
The 1977 law also requires the Fed chair to testify before the House and Senate twice every year about the economy and interest rate policy.
Could the president fire Powell before his term ends?
The Supreme Court earlier this year suggested in a ruling on other independent agencies that a president can’t fire the chair of the Fed just because he doesn’t like the chair’s policy choices. But he may be able to remove him “for cause,” typically interpreted to mean some kind of wrongdoing or negligence.
It’s a likely reason the Trump administration has zeroed in on the building renovation, in hopes it could provide a “for cause” pretext. Still, Powell would likely fight any attempt to remove him, and the case could wind up at the Supreme Court.
Meta will license technology from artificial intelligence image and video generation start-up Midjourney, as the social media group shifts towards working with third parties amid its struggle to keep pace with rivals.
Alexandr Wang, Meta’s new chief AI officer, said in a post on X on Friday that the company planned to license Midjourney’s “aesthetic technology for our future models and products, bringing beauty to billions” in a “technical collaboration” between their research teams.
“To ensure Meta is able to deliver the best possible products for people it will require taking an all-of-the-above approach,” he added. “This means world-class talent, ambitious compute road map, and working with the best players across the industry.”
The tie-up will allow Meta to develop and integrate multimedia AI generation features into its apps, as chief executive Mark Zuckerberg has indicated that he expects AI-generated content to become more prominent on the platform.
The move comes as Zuckerberg pours billions of dollars into developing “superintelligence” that surpasses human intelligence. In recent months he has aggressively poached top AI researchers from competitors, doubled down on his investment in AI infrastructure and acquired AI voice company Play AI. Meta also took a stake in data labelling group Scale AI.
This week, Meta announced it was restructuring its AI group — recently renamed Meta Superintelligence Lab — into four distinct teams, the fourth overhaul in six months as it has struggled to solidify its organisational structure.
The Midjourney partnership marks a shift by Meta away from building all of its AI models and products in house, after its existing ones began to lag rivals.
In 2024 Meta rolled out an image generation tool called Imagine, which allows users to generate images from text prompts. Last October it shared a research paper on a movie generation model, Movie Gen, that can generate and edit videos based on text prompts. Meta promised to integrate it fully into Instagram in 2025.
However, the integration has yet to happen and industry insiders say the model already appears antiquated compared with Google’s Veo 3 and OpenAI’s Sora models, which have been released to consumers.
The social media company had also abandoned plans to publicly release its flagship Behemoth large language model, according to people familiar with the matter, focusing instead on building new models.
Meta had started using third-party models internally for tasks such as coding, according to multiple people familiar with the matter, as faith in its Llama models has waned.
San Francisco-based Midjourney, founded in 2021 by David Holz, has become one of the most popular image generation companies, despite its chief executive refusing venture capital and instead opting to self-fund. In June, it released its video model V1, which allows users to generate a short video from an existing image.
Holz said in a post on X on Friday that “bringing sublime tools of creation and beauty to billions of people is squarely within our mission”, adding that Midjourney remained an “independent, community-backed research lab, with no investors”.
The ‘desk’ of a 33-year-old developer in Starbucks
In the affluent Seoul neighbourhood of Daechi, Hyun Sung-joo has a dilemma.
His cafe is sometimes visited by Cagongjok, a term for mostly young South Koreans who love to study or work at cafes, but there’s a limit.
He says one customer recently set up a workspace in his cafe which included two laptops and a six-port power strip to charge all their devices – for an entire day.
“I ended up blocking off the power outlets,” the cafe owner of 15 years tells the BBC.
“With Daechi’s high rents, it’s difficult to run a cafe if someone occupies a seat all day.”
The cultural phenomenon of Cagongjok is rampant in South Korea, especially in areas with large numbers of students and office workers. They often dominate cafes on a much greater scale than in Western countries such as the UK, where those studying are typically surrounded by others there to socialise.
And Starbucks Korea warned this month that a minority of people are going further than just laptops, such as bringing in desktop monitors, printers, partitioning off desks or leaving tables unattended for long periods.
The chain has now launched nationwide guidelines aimed at curbing “a small number of extreme cases” where elaborate setups or prolonged empty seats disrupt other customers.
Starbucks said staff would not ask customers to leave, but rather provide “guidance” when needed. It also cited previous cases of theft when customers left belongings unattended, calling the new guidelines “a step toward a more comfortable store environment”.
Students often set up study areas in South Korean cafes
It doesn’t seem to be deterring the more moderate Cagongjok though, for whom Starbucks has been somewhat of a haven in recent years and continues to be.
On a Thursday evening in Seoul’s Gangnam district, a Starbucks branch buzzes quietly with customers studying, heads down over laptops and books.
Among them is an 18-year-old student who dropped out of school and is preparing for the university entrance exam, “Suneung”.
“I get here around 11am and stay until 10pm,” she tells the BBC. “Sometimes I leave my things and go eat nearby.”
We have seen no bulky equipment during our visits to Starbucks since the new guidelines were issued on 7 August, though we did see one man with a laptop stand, keyboard and mouse. Some customers still appear to be leaving their seats unattended for long periods, with laptops and books spread across tables.
When asked whether its new restrictions have led to visible changes, Starbucks Korea told the BBC it was “difficult to confirm”.
Some students set up their belongings and then left a Starbucks seen here in Suwon
Reactions to Starbucks’ move have been mixed. Most welcome the policy as a long-overdue step toward restoring normalcy in how cafes are used.
This is particularly so among those who visit Starbucks for relaxation or conversation, who say it has become difficult to find seats because of Cagongjok, and that the hushed atmosphere often made them feel self-conscious about talking freely.
A few have criticised it as overreach, saying the chain has abandoned its previously hands-off approach.
It reflects a wider public discussion in South Korea over Cagongjok that has been brewing ever since the trend took off in 2010, coinciding with the growth of franchised coffee chains in the country. The number of coffee shops has kept growing, with a 48% increase over the past five years to nearly 100,000, according to the National Tax Service.
Some 70% of people in a recent survey of more than 2,000 Gen Z job seekers in South Korea by recruitment platform Jinhaksa Catch said they studied in cafes at least once a week.
‘Two people would take up enough space for 10 customers’
Dealing with “seat hogging” and related issues is a tricky balance, and the independent cafes grappling with a similar thing have deployed a range of approaches.
While Hyun has experienced customers bringing multiple electronic devices and setting up workstations, he says extreme cases like this are rare.
“It’s maybe two or three people out of a hundred,” he said. “Most people are considerate. Some even order another drink if they stay long, and I’m totally fine with that.”
Hyun’s cafe, which locals also use as a space for conversation or private tutoring, still welcomes Cagongjok as long as they respect the shared space.
Some other cafe franchises even cater to them with power outlets, individual desks and longer stay allowances.
Cafe owner Hyun Sung-joo isn’t against Cagongjok but finds some customers take it too far
But other cafe owners have taken stricter steps. Kim, a cafe owner in Jeonju who asked to remain anonymous, introduced a “No Study Zone” policy after repeated complaints about space being monopolised.
“Two people would come in and take over space for 10. Sometimes they’d leave for meals and come back to study for seven or eight hours,” he says. “We eventually put up a sign saying this is a space for conversation, not for studying.”
Now his cafe allows a maximum of two hours for those using it to study or work. The rule does not apply to regular customers who are simply having coffee.
“I made the policy to prevent potential conflicts between customers,” Kim says.
‘Cagongjok’ – here to stay?
Yu-jin Mo feels more comfortable in cafes than in libraries
So what’s behind the trend and why do so many in South Korea feel the need to work or study in cafes rather than in libraries, shared workspaces or at home?
For some, the cafe is more than just an ambient space; it’s a place to feel grounded.
Yu-jin Mo, 29, tells the BBC about her experience growing up in foster care. “Home wasn’t a safe place. I lived with my father in a small container, and sometimes he’d lock the door from the outside and leave me alone inside.”
Even now, as an adult, she finds it hard to be alone. “As soon as I wake up, I go to a cafe. I tried libraries and study cafes, but they felt suffocating,” she says.
Later Ms Mo even ran her own cafe for a year, hoping to offer a space where people like her could feel comfortable staying and studying.
Professor Choi Ra-young of Ansan University, who has studied lifelong education for over two decades, sees Cagongjok as a cultural phenomenon shaped by South Korea’s hyper-competitive society.
“This is a youth culture created by the society we’ve built,” she tells the BBC. “Most Cagongjok are likely job seekers or students. They’re under pressure – whether it’s from academics, job insecurity or housing conditions with no windows and no space to study.
“In a way, these young people are victims of a system that doesn’t provide enough public space for them to work or learn,” she adds. “They might be seen as a nuisance, but they’re also a product of social structure.”
Professor Choi said it was time to create more inclusive spaces. “We need guidelines and environments that allow for cafe studying – without disturbing others – if we want to accommodate this culture realistically.”
Synthetic data can reliably mirror real-world data (RWD) in chronic spontaneous urticaria (CSU), potentially enabling smaller clinical trial sample sizes without compromising statistical power, a recent study found.1 The findings, published in Clinical and Translational Allergy, highlight a significant challenge in CSU research—the ongoing difficulty of enrolling and retaining adequate patient numbers, especially among those with comorbidities, older age, or uncommon disease subtypes.
The authors noted, “Robust data are essential for clinical and epidemiological research, yet in chronic spontaneous urticaria (CSU), certain patient groups, such as the elderly or comorbid patients, are often underrepresented. In clinical trials, strict inclusion and exclusion criteria frequently limit recruitment, making it difficult to achieve sufficient statistical power. Similarly, real-world observational studies may lack sufficient sample sizes for robust analysis.”
Using data from the Chronic Urticaria Registry (CURE), researchers extracted information on 4136 patients across 30 countries and 12 ethnicities, capturing a comprehensive set of demographic, laboratory, and patient-reported outcome variables. Synthetic datasets were generated using a Classification and Regression Trees (CART) algorithm, which allows the synthetic cohorts to “maintain the statistical properties and correlations of the original data without directly copying any individual records.”
This method preserves patient privacy while still capturing clinical and demographic diversity.
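The paper does not publish its implementation, but CART-based synthesis of the kind described (popularised by the R package synthpop) is typically done sequentially: each variable is modelled by a decision tree on the variables already synthesized, and synthetic values are drawn from the donor records in the leaf each synthetic record lands in. A minimal Python sketch of that idea, using illustrative column names rather than actual CURE variables:

```python
# Sketch of sequential CART-based synthesis (synthpop-style), assuming a
# numeric pandas DataFrame `real` of patient variables. Column names used
# in examples (age, bmi, uas7) are illustrative, not taken from CURE.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

def synthesize(real: pd.DataFrame, n_synth: int, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    cols = list(real.columns)
    synth = pd.DataFrame(index=range(n_synth))
    # First variable: bootstrap from its marginal distribution.
    synth[cols[0]] = rng.choice(real[cols[0]].to_numpy(), size=n_synth)
    # Each later variable: fit a CART on the variables already synthesized,
    # then sample donor values from the leaf each synthetic record falls in.
    # This preserves correlations without copying whole records.
    for j in range(1, len(cols)):
        X, y = real[cols[:j]].to_numpy(), real[cols[j]].to_numpy()
        tree = DecisionTreeRegressor(min_samples_leaf=5, random_state=seed).fit(X, y)
        real_leaf = tree.apply(X)
        synth_leaf = tree.apply(synth[cols[:j]].to_numpy())
        synth[cols[j]] = [rng.choice(y[real_leaf == leaf]) for leaf in synth_leaf]
    return synth
```

Because every synthetic value is a donor value drawn from a tree leaf containing at least `min_samples_leaf` real records, no single patient record is reproduced wholesale, which is how this family of methods balances fidelity against privacy.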
When systematically compared with real-world data, the synthetic datasets showed strong alignment across key measures. In terms of gender, RWD reported 72.4% female (n = 2994) vs 71.7% (n = 2965) in synthetic data, with no significant difference (P = .47). Age distributions were virtually identical: mean (SD) 44.2 (16.3) years in RWD compared with 44.3 (16.4) years in synthetic data (P = .85). Body mass index was similarly replicated (26.3 vs 26.1; P = .28).
Clinical characteristics were also successfully replicated. Daily wheals were reported by 28.6% of real patients compared with 28.8% of synthetic patients, while angioedema was absent in 24.3% of RWD patients, which was matched by 23.7% in synthetic data. Comorbidity burden was nearly identical, with mean comorbidities of 1.98 in RWD and 1.96 in synthetic datasets (P = .77). Atopic dermatitis prevalence was 4.8% in both groups, and allergic rhinitis occurred in 19.1% and 19.2% of patients, respectively (P = .98). Similarly, comorbidity burden, laboratory parameters such as IgE, and medication use showed no significant differences, reinforcing that synthetic datasets can reliably capture diverse clinical characteristics.
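The P values above come from standard two-sample comparisons between the real and synthetic cohorts. As an illustration of the kind of test involved (using simulated data and the gender counts reported above, not the CURE registry itself), a continuous variable such as age can be checked with a two-sample t-test and a categorical one such as gender with a chi-square test:

```python
# Illustrative equivalence checks between a real and a synthetic cohort.
# The age samples are simulated; the gender counts match those reported.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age_real = rng.normal(44.2, 16.3, 4136)   # simulated stand-in for RWD ages
age_synth = rng.normal(44.3, 16.4, 4136)  # simulated stand-in for synthetic ages
t, p_age = stats.ttest_ind(age_real, age_synth)

# 2x2 contingency table: female/male counts in real vs synthetic cohorts.
table = np.array([[2994, 4136 - 2994],
                  [2965, 4136 - 2965]])
chi2, p_gender, dof, _ = stats.chi2_contingency(table)
```

A large P value in these tests is the desired outcome here: it indicates no detectable difference between the real and synthetic distributions.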
Further subgroup analyses, including patients aged ≥60 years, those with BMI ≥25 or ≥30, and both male and female cohorts, displayed no statistically significant differences in core characteristics and disease scores when comparing synthetic and real-world data (all P > .10). Correlation analyses further validated synthetic fidelity: the strong negative correlation between UAS7 and UCT seen in RWD (r = -.73) was reproduced in synthetic data (r = -.72; P = .58).
The results of the study show that synthetic data could maintain accuracy down to 25% of the original RWD sample size. The authors explained, “Enrolling just 38 patients in a clinical trial and applying GenDT allows us to generate a synthetic cohort of 150 patients. In other words, we can produce a synthetic patient population that is 4 times larger while maintaining high-quality data.” Previous technologies, such as Unlearn.AI, have only achieved a 33% reduction in control arm size, whereas this approach offers a 75% reduction in sample size for both control and treatment arms with equivalent statistical power.2
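The arithmetic behind those two figures is worth making explicit: a synthetic cohort of 150 generated from 38 enrolled patients is roughly a 4x amplification, and enrolling 38 patients instead of 150 is roughly a 75% reduction in recruitment. A trivial check:

```python
# Back-of-envelope check of the amplification the authors describe:
# 38 enrolled patients expanded into a synthetic cohort of 150.
enrolled = 38
synthetic_cohort = 150

amplification = synthetic_cohort / enrolled  # synthetic patients per real patient
reduction = 1 - enrolled / synthetic_cohort  # fraction of recruitment avoided

print(f"{amplification:.2f}x amplification")
print(f"{reduction:.0%} reduction")
```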
However, researchers caution that synthetic data performed best with continuous variables such as age, BMI, and patient-reported outcome scores, but categorical variables, including treatment type or symptom frequency, were more prone to errors when generated from smaller sample sizes.1 “Further research is necessary to establish and validate the standards of this method, allowing the scientific community to benefit from its advantages and safely use it in research settings,” the authors note.
These findings suggest that synthetic data generation, when rigorously validated, could ease barriers in CSU research, especially for understudied populations such as older adults, those with comorbidities, and patients with rare disease variants. By extending smaller cohorts into adequately powered synthetic populations, researchers may accelerate hypothesis testing, enhance subgroup analyses, and reduce the costs and burdens associated with recruitment.
References
1. Gutsche A, Salameh P, Jahandideh SS, et al. Can synthetic data allow for smaller sample sizes in chronic urticaria research? Clin Transl Allergy. 2025;15(8):e70087. doi:10.1002/clt2.70087
2. Yakubu A, Bogert J, Zhuang R, et al. Accelerating randomized clinical trials in Alzheimer’s disease using generative machine learning model forecasts of progression. Alzheimers Dement. Published online January 9, 2025. doi:10.1002/alz.086486