Category: 3. Business

  • Saudi Arabia's massive wealth fund sees $8 billion writedown in megaprojects – MSN

    1. Saudi Arabia’s massive wealth fund sees $8 billion writedown in megaprojects  MSN
    2. As if firing hundreds of staffers weren’t enough, Saudi Crown Prince MBS’s gigaproject Neom now faces an even harsher reality check: an $8 billion write-off.  Luxurylaunches
    3. Saudi’s PIF takes $8 billion writedown on megaprojects  Semafor
    4. PIF’s strong financial position fuels Kingdom’s economic transformation  Arab News PK
    5. Sovereign Fund Posts Lower Book Value of Saudi Gigaprojects in FY24 Report  MarketScreener

  • Puppy fat jabs: are our pets next in line for weight-loss drugs? | Pets

    Where humans lead, their dogs tend to follow – now it seems that might even apply to weight-loss wonder drugs.

    Medications such as Wegovy have become ubiquitous among people hoping to shed pounds quickly. But businesses keen to cash in on the science behind the weight-loss jabs are now investigating other applications for the drugs, and our four-legged friends could be the next in line for a slimming solution.

    The active ingredients in the drugs mimic a hormone called GLP-1, which makes people want to eat less. One biotech firm has just announced trials for an implant that reproduces the effect in dogs, with the aim of bringing it to the market as soon as 2028.

    The hope is that the same science can be used to quell the voracious appetite of some dog breeds that can lead to them piling on the pounds.

    While experts say such medications could be beneficial for some overweight animals, their use outside of humans is not without complication or the potential for controversy.

    What is not contentious is that pet weight is a real issue for many owners.

    Neutering, age, a lack of activity and overfeeding are among the factors that can contribute to the problem. According to a 2024 report by the trade body UK Pet Food, 50% of dogs and 43% of cats are overweight.

    Excess weight can shorten the lifespan of pets and reduce their quality of life; tubby cats, for instance, face a greater risk of problems including diabetes, urinary tract issues and cancer, while overweight canines are more likely to have to contend with conditions such as arthritis, heart disease, breathing problems and cancer.

    Commonly recommended solutions are increased exercise and strict prescription diets that are high in fibre and protein but low in calories.

    Dr Eleanor Raffan, a veterinary surgeon and expert in canine genetics and obesity at the University of Cambridge, said some good old-fashioned discipline should be the first option.

    “I would [advise] owners, both for the benefit of their pockets, and possibly for the benefit of their pets, to try modifying their dog’s diet and exercise regime first, because I think we know that that can be safe and effective if done well,” she said.

    “But if that fails, or if there’s an urgent need to get weight loss, then I see no reason why using [GLP-1 mimic] drugs shouldn’t be a reasonable option, so long as they are tested in proper, prospective, well-designed, randomised clinical trials before being widely offered in practice.”

    A strong selling point of the medication is that it helps pet owners navigate one of the biggest obstacles to pet weight loss: what many vets describe as “pester power”, or, to put it another way, humans’ inability to say no to their loyal companions.

“What our research shows in our group … is that if you have a very foodie dog, you have to work much harder,” she said. “You have to really resist the big brown eye treatment and that can be really hard in our busy lives today.”

    Appetite suppressants may help stop the kind of begging that most pet owners are familiar with, but they come with one major drawback: that a pet’s appetite is often an important marker of their health. Some experts worry that if humans are unable to tell if their animal has stopped eating because they are unwell or because the weight loss drugs are doing their job, it could prove dangerous.

    “If cats stop eating for a few days, they can develop a condition called hepatic lipidosis and other problems, which can be life threatening,” said Raffan.

Michael Klotsman is the chief executive of Okava, a company developing a long-acting implant called OKV-119 that contains a GLP-1 mimic called exenatide.

    He said behavioural changes from OKV-119 were quite different from illness-related appetite loss.

    “What owners should expect to see is their pet eating appropriate portions without the previous food obsession – they’ll still eat regularly and show interest in meals, just without the excessive begging, scavenging or gulping behaviour,” he said.

    The company is planning trials in dogs, and hopes to launch its implant commercially in 2028 or 2029.

    Klotsman said: “OKV-119 represents an additional tool for veterinarians treating pets where conventional approaches have been insufficient, similar to how GLP-1 therapies have provided new hope for human patients struggling with obesity despite their best efforts with diet and exercise.”

    Prof Peter Sandøe, of the University of Copenhagen and the director of the Danish Centre for the Study of Companion Animal Welfare, said such drugs could potentially help some pets, such as food-obsessed dogs.

    However, he added, if owners were concerned enough about their pet’s weight to consider such medications, then there were many other – probably cheaper – options they could try, from activity feeders to extra walks, microchip-controlled feeders, and switching out treats for fun and games.

    “Why take the medical solution if there’s some other solutions that actually might be better for both human and animal welfare?” he said.

  • Trump hiked tariffs on US imports. Now he’s looking at exports – sparking fears of ‘dangerous precedent’ | Trump administration

    Apple CEO Tim Cook visited the White House bearing an unusual gift. “This box was made in California,” Cook reassured his audience in the Oval Office this month, as he took off the lid.

    Inside was a glass plaque, engraved for its recipient, and a slab for the plaque to sit on. “The base was made in Utah, and is 24-karat gold,” said Cook.

    Donald Trump appeared genuinely touched by the gift.

    But the plaque wasn’t Cook’s only offering: Apple announced that day it would invest another $100bn in US manufacturing.

    The timing appeared to work well for Apple. That day, Trump said Apple would be among the companies that would be exempt from a new US tariff on imported computer chips.

The Art of the Deal looms large in the White House, where Trump is brokering agreements with powerful tech companies – in the midst of his trade war – that are reminiscent of the real estate transactions that launched him to fame.

    But in recent days, this dealmaking has entered uncharted waters.

Two days after Cook’s visit, Nvidia CEO Jensen Huang had a closed-door meeting with Trump at the White House. The president later announced that Nvidia, along with its rival Advanced Micro Devices (AMD), would be allowed to sell certain artificial intelligence chips to Chinese companies – so long as they share 15% of their revenue with the US government.

    It was a dramatic about-face from Trump, who initially blocked the chips’ exports in April. And it swiftly prompted suggestions that Nvidia was buying its way out of simmering tensions between Washington and Beijing.

    Trade experts say such a deal, where a company essentially pays the US government to export a good, could destabilize trading relations. Martin Chorzempa, a senior fellow at the Peterson Institute for International Economics, said that it creates “the perception that export controls are up for sale”.

    “If you create the perception that licenses, which are supposed to be determined on pure national security grounds, are up for sale, you potentially open up room for there to be this wave of lobbying for all sorts of really, dangerous, sensitive technologies,” Chorzempa said. “I think that’s a very dangerous precedent to set.”

Though the White House announced the deal, it technically hasn’t been rolled out yet, likely because of legal complications. The White House is calling the deal a “revenue-sharing” agreement, but critics point out that it could also be considered a tax on exports, which may not be legal under US law or the constitution.

    The “legality” of the deal was “still being ironed out by the Department of Commerce”, White House press secretary Karoline Leavitt told reporters this week.

    Nvidia and AMD’s AI chips are at the heart of the technological arms race between the US and China. Nvidia, which became the first publicly traded company to reach a $4tn valuation last month, creates the essential processing chips that are used to run and develop AI.

    The US government has played a role in this arms race over the last several years, setting regulations on what AI chips and manufacturing equipment can be sent to China. If China has less computing power, the country will be slower to develop AI, giving a clear advantage to the US.

    But despite the restrictions, China has been catching up, raising questions on how US policy should move forward.

    “They haven’t held them back as far as the advocates had hoped. The US has an enormous computing advantage over China, but their best models are only a few months behind our best models,” Chorzempa said. For US policymakers, “the question they’ve had to grapple with is: Where do you draw the line?”

    The AI chips Nvidia and AMD can now sell to China aren’t considered high-end. While they can be used for inference on trained models, they aren’t powerful enough to train new AI models. When announcing the deal with Nvidia and AMD, Trump said the chip is “an old chip that China already possesses … under a different label”.

    This is where a major debate on AI policy comes in. Those who take a hardline stance on the US’s relationship with China say that allowing Chinese companies to purchase even an “old chip” could still help the country get an advantage over the US. Others would say a restriction on such chips wouldn’t be meaningful, and could even be counterproductive.

    To balance these two sides, the Trump administration is asking companies to pay up in order to export to China – a solution that people on both sides of the AI debate say is a precarious one.

    “Export controls are a frontline defense in protecting our national security, and we should not set a precedent that incentivizes the government to grant licenses to sell China technology that will enhance AI capabilities,” said John Moolenaar, a Republican US representative from Michigan, in a statement.

    But Trump’s gut-reaction to dealmaking seems focused on the wallet. On Wednesday, US treasury secretary Scott Bessent praised the arrangement and suggested it could be extended to other industries over time. “I think that right now this is unique, but now that we have the model and the beta test, why not expand it?” he told Bloomberg.

    Julia Powles, executive director of the Institute for Technology, Law and Policy at the University of California, Los Angeles, said the deal opens up questions of whether similar pressure can be applied to other tech companies.

    “What other quid pro quo might be asked in the future? The quid pro quo that would be of great concern to the [tech] sector is anything that reduces their reputation for privacy and security,” Powles said. “That’s thinking of government like a transactional operator, not like an institution with rules about when, how and for what it can extract taxes, levies and subsidies.”

    But that seems to be how the White House runs now. When explaining to the press how he made the deal, Trump said he told Huang: “I want 20% if I’m going to approve this for you”.

    “For the country, for our country. I don’t want it myself,” the president added. “And he said, ‘Would you make it 15?’ So we negotiated a little deal.”

  • China mandates more domestic AI chips for data centres to cut reliance on Nvidia

    China is requiring its data centres to use more home-grown computing chips in a move that underscores Beijing’s accelerated efforts to cut reliance on foreign technology as the US tightens export controls.

    Publicly owned computing hubs across the country have been asked to source more than 50 per cent of their chips from domestic producers to support the indigenous semiconductor sector, according to people familiar with the matter.

    The mandate finds its origins in guidelines proposed in March last year by the Shanghai municipality, which was among the first in the country to stipulate that “adoption of domestic computing and storage chips at the city’s intelligent computing centres should be above 50 per cent by 2025”.

    The guidelines were part of a policy to strengthen artificial intelligence computing resources in China’s financial hub. The plan was backed by government agencies including branches of the National Development and Reform Commission (NDRC) in the city and the Shanghai Communications Administration, an agency under the Ministry of Industry and Information Technology (MIIT).

    One source, who works as an adviser in the data centre industry, said that earlier this year the Shanghai chip quotas for the city’s intelligent computing centres had become mandatory nationwide policy.

    The MIIT and NDRC did not immediately respond to a request for comment on Saturday outside business hours.

  • Al Gore’s Investment Firm Sells Amazon Stock, Exits Mastercard, and Buys Visa – Barron's

    1. Al Gore’s Investment Firm Sells Amazon Stock, Exits Mastercard, and Buys Visa  Barron’s
    2. Fisher Asset Management Reduces Stake in Amazon  The Globe and Mail
    3. Al Gore’s Generation Investment Management Reduces Amazon.com Inc by 3.8% in Q2 2025  Yahoo Finance
    4. Duquesne Family Office LLC Reduces Amazon Holdings  TipRanks
    5. Pershing Square Boosts Amazon Stake with 5.8M Shares  TipRanks

  • Digital measurement of pupil size, corneal size, and eccentricity in Guinea pigs using python compared with traditional OCT

    Ethical approval 

All experiments in this study were conducted in full compliance with the ARRIVE guidelines and the relevant provisions of the Institutional Animal Care and Use Committee (IACUC) guidelines. Ethical approval was granted by the Laboratory Animal Management and Ethics Committee of the Shanghai Clinical Centre for Public Health (No. 2022-A009-01).

    Experimental animals

    Fourteen healthy male and female guinea pigs from Danyang City Yichang Laboratory Animal Co. Ltd (SCXK 2021-0002) were housed in a 12 h light/12 h dark cycle. All animals underwent slit-lamp (YZ3, 66 Vision-Tech, China) examinations, and no clinically observable ocular surface diseases were detected. All guinea pigs were euthanized by intraperitoneal injection of an overdose of sodium pentobarbital.

    Experimental procedure

Every two weeks, pupil size, corneal size, and eccentricity were measured in all guinea pigs until they reached 12 weeks of age. In addition, the left eyes of the guinea pigs were exposed to light intensities of 100 lx, 250 lx, and 500 lx, and the pupillary responses of 2-week-old and 12-week-old guinea pigs to the different light intensities were observed. All measurements were performed between 9:00 and 11:00 AM, and for each guinea pig only the left eye was measured. During each measurement, the guinea pig’s pupil and corneal limbus were kept fully exposed. All measurements were made under a 50 lx lighting condition to minimize the effect of light on pupil size, with a light meter (DLY-1801 C, DELIXI, China) used to measure the illumination level at the guinea pig’s eye level. After the in vivo measurements were completed, the guinea pigs were euthanized with sodium pentobarbital, the eyeballs were carefully removed, and the surrounding fascial and muscular tissues were cleared without puncturing the eyeballs in a way that would alter their morphology. The eyeballs were then placed at the center of the platform, and a top-down camera (T201, NETUM, China) was used to capture images of the coronal eye contour to measure the biological parameters of the guinea pig eyes in vitro.

    Data measurement

In previous studies, we developed a method for biometric measurement of experimental animal eyes in vitro, utilizing Python 3.9 programming software27 and a camera. We wrote the code in PyCharm Community Edition 2021.1.3 x64 (Fig. 2A), and imported the Tkinter module to create a graphical user interface (GUI) window (Fig. 2B). The program was packaged into an executable external file along with an installation module to enable visual human-computer interaction. This program was specifically designed for small animal eye measurements and includes features such as image import, line color and thickness customization, conversion coefficient calculation, temporary recording modules, manual point selection for edge coordinates, and curve fitting modules. Finally, the operator can export the generated images’ scale using a designated button to facilitate the calculation of large datasets.

The guinea pig was placed on a specially designed fixture, and gentle handling was applied to keep the animal calm. A high-definition camera (13 Pro Max, iPhone, US) was used to capture clear and steady images of the guinea pig’s eyes (Fig. 5B). Colorful objects and sounds were used to direct the guinea pig’s gaze to the left or right, and images were taken when the eye movement was most pronounced, ensuring that both the pupil and corneal edges were clearly identifiable. A scale with an actual distance reference was fixed in the same plane, and both the left and right eyes were photographed. The images were then input into the compiled program, and the matrix plot module was used to open the image (code: image = Image.open(path + pic, 'r')). This process ensured that each point in the image corresponded to a two-dimensional coordinate (x, y).

    Fig. 5

    Measurement of guinea pig eye parameters using edge detection, curve fitting and pixel-to-actual distance conversion. (A) Photographs of guinea pigs at specific thresholds by edge detection technique. (B) Non-contact measurement of pupil size and corneal size in guinea pigs using a curve-fitting technique. Yellow circles are fitted guinea pig pupil and corneal contours. (C) Conic curve fitting to measure guinea pig pupil and corneal eccentricity. The yellow ellipse is the fitted pupil and cornea, and the red lines are the long and short axes of the fitted ellipse. (D) By taking repeated segments on the scale and calculating the average, an accurate conversion factor can be obtained. For example, the conversion factor was calculated to be 0.0193 for the red line, 0.0191 for the green line, 0.0192 for the yellow line, and the final conversion factor was taken as the average of the three, 0.0192.

    Pixel-actual distance conversion method

The captured image was imported into Python programming software. The imported image corresponds to a 2D coordinate system with horizontal and vertical coordinates of the pixel length and width values of the image, with each point having a corresponding pixel-valued coordinate. The coordinates in the two-dimensional image are obtained using the “ginput” function. Using geometric principles (formula: L_mn = √((m1 − m2)² + (n1 − n2)²)), the distance between any two points in the coordinate system, (m1, n1) and (m2, n2), can be calculated. After obtaining the corresponding pixel values for the actual distance in the central region, the actual distance between any two points in the image can be derived through conversion. By repeatedly selecting points on the scale and calculating the average, an accurate conversion factor can be obtained (Fig. 5D).
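As a sketch of this conversion step (the helper names and the 10 mm scale segments are illustrative, not taken from the paper's actual program):

```python
import numpy as np

def pixel_distance(p1, p2):
    """Euclidean distance between two pixel coordinates (m1, n1) and (m2, n2)."""
    (m1, n1), (m2, n2) = p1, p2
    return np.sqrt((m1 - m2) ** 2 + (n1 - n2) ** 2)

def conversion_factor(scale_pairs, actual_length_mm):
    """Average mm-per-pixel factor from repeated point pairs clicked on the
    scale, each pair spanning the same known actual_length_mm."""
    factors = [actual_length_mm / pixel_distance(p1, p2) for p1, p2 in scale_pairs]
    return sum(factors) / len(factors)

# Three repeated 10 mm segments selected on the scale (as in Fig. 5D)
pairs = [((100, 200), (620, 200)),
         ((100, 240), (621, 240)),
         ((100, 280), (619, 280))]
factor = conversion_factor(pairs, actual_length_mm=10.0)  # about 0.0192 mm/px
```

Multiplying any pixel distance measured in the image by this factor then gives the actual distance.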

    Eyeballs edge data acquisition

The Canny edge detection algorithm was used to obtain a double-thresholded image (Fig. 5A). Points with sharp changes in image brightness were organized into a set of curved segments called edges. The non-target edges were removed, only the eye edges were retained, and the image was saved. The findContours function was then used to obtain the coordinates of the edge. When edge detection was not effective, coordinates could also be obtained by manually selecting points on the target contour. The data were then saved in a .txt file.

    Circle fitting to calculate pupil and corneal size

    Assuming that the center of the pupil or cornea forms a circle, circle fitting is applied (Fig. 5B). By using the conversion factor, the actual diameter of the circle can be obtained, which further allows for the calculation of the pupil and corneal area, as reported in previous studies12.
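One common way to realise this step is an algebraic least-squares (Kåsa) circle fit; the sketch below assumes that method and an illustrative conversion factor, since the paper does not give its exact fitting code:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 + a*x + b*y + c = 0,
    then recover the centre (cx, cy) and radius r."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r

# Points sampled on a pupil contour of radius 60 px centred at (100, 100)
theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
contour = np.column_stack([100 + 60 * np.cos(theta), 100 + 60 * np.sin(theta)])
cx, cy, r_px = fit_circle(contour)

# Convert the fitted pixel radius to real units with the scale's factor
factor = 0.0192  # mm per pixel, illustrative value from the scale
diameter_mm = 2.0 * r_px * factor
area_mm2 = np.pi * (r_px * factor) ** 2
```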

Ellipse fitting to calculate eccentricity

    The pupil and corneal contour resembled an inclined elliptical shape (Fig. 5C). This shape fits well to the conic equation (Ax² + Bxy + Cy² + Dx + Ey + F = 0). Based on the geometry and the least squares principle, the eccentricity of the cornea can then be calculated.
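A possible implementation of this conic fit, assuming an SVD least-squares solution (the paper does not specify its exact solver); the eccentricity follows from the eigenvalues of the quadratic-form matrix:

```python
import numpy as np

def conic_eccentricity(points):
    """Fit the conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by least squares
    (SVD null vector) and return the eccentricity of the fitted ellipse."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x ** 2, x * y, y ** 2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(M)
    A, B, C, D, E, F = Vt[-1]  # coefficient vector minimising ||M v||, ||v|| = 1
    # For an ellipse the quadratic-form eigenvalues share a sign; semi-axis
    # lengths are proportional to 1/sqrt(|lambda|), so e^2 = 1 - lam_min/lam_max
    lam = np.sort(np.abs(np.linalg.eigvalsh(np.array([[A, B / 2], [B / 2, C]]))))
    return float(np.sqrt(1.0 - lam[0] / lam[1]))

# Pupil contour: ellipse with semi-axes 5 and 3 px, tilted 30 degrees
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
phi = np.deg2rad(30.0)
xe = 100 + 5 * np.cos(t) * np.cos(phi) - 3 * np.sin(t) * np.sin(phi)
ye = 100 + 5 * np.cos(t) * np.sin(phi) + 3 * np.sin(t) * np.cos(phi)
ecc = conic_eccentricity(np.column_stack([xe, ye]))  # sqrt(1 - 9/25) = 0.8
```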

    Measurement of guinea pig pupil and corneal size using the Auto Refractometer

The guinea pig was tranquilized, kept quiet, and fixed in the detection position of the Auto Refractometer (KR-800, Topcon, Japan). A second operator operated the Auto Refractometer and switched the instrument to pupil mode. The parameters and position of the instrument were adjusted so that the image of the guinea pig’s eye was centered in the viewfinder frame, and a photograph was taken once the image was at its clearest. After the picture was taken, the measuring scale in the image was carefully aligned with the edge of the guinea pig’s pupil or cornea, and the Auto Refractometer then automatically measured the size of the pupil or cornea.

    Measurement of guinea pig pupil and corneal size using OCT

The guinea pigs were tranquilized by an operator, kept quiet, and immobilized in the detection position of the OCT device (YG-20 W MAX, TowardPi, China); the animals were kept still throughout the examination to maintain a stable testing environment. A second operator operated the OCT device, set the detection mode to anterior segment mode, and installed the anterior segment adapter. The parameters of the device were adjusted so that the guinea pig eye image was centered and in high definition. Images in which the guinea pig pupil and corneal structures were fully exposed were defined as standard images. After image acquisition, the corneal limbus (corneo-scleral transition zone), the pupil margin, and the geometric center were marked manually using the accompanying analysis software, and the values of each parameter were calculated. To ensure the reliability of the data, each measurement was repeated three times by two independent operators, and the mean values were taken for subsequent analysis.

    Statistical analysis

    The data were imported into Python 3.9 for statistical analysis and graphing. All data in this study, unless otherwise stated, are mean ± standard deviation. ANOVA was used to compare pupil size at different light intensities, and paired-samples t-tests were used to compare differences between Python and OCT measurements. Differences between any two parameters were defined as significant at p < 0.05 and highly significant at p < 0.001.
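The two tests can be reproduced with SciPy; the numbers below are illustrative stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (mm) of the same eyes by the two methods
python_vals = np.array([3.10, 3.25, 2.98, 3.40, 3.15, 3.30, 3.05])
oct_vals = np.array([3.12, 3.22, 3.01, 3.38, 3.18, 3.27, 3.08])

# Paired-samples t-test: Python-based vs. OCT measurements of the same animals
t_stat, p_paired = stats.ttest_rel(python_vals, oct_vals)

# One-way ANOVA: pupil size at three light intensities (hypothetical data)
lx100 = [4.1, 4.3, 4.0, 4.2]
lx250 = [3.6, 3.5, 3.7, 3.6]
lx500 = [3.0, 3.1, 2.9, 3.0]
f_stat, p_anova = stats.f_oneway(lx100, lx250, lx500)

print(f"paired t-test: p = {p_paired:.3f}; ANOVA: p = {p_anova:.4f}")
```

With p < 0.05 as the significance threshold, a non-significant paired test would indicate agreement between the two measurement methods.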

  • Solar farm could supply ‘a quarter of Peterborough’s homes’

    Plans for a solar farm that would have the capacity to power a quarter of the homes in Peterborough have been submitted.

    FRV TH Powertek wants to install 100,000 solar panels across 80 hectares (about 200 acres) at Malice Farm, near Thorney, near Peterborough.

    If approved, the solar farm would be in place for 40 years. Construction would take nearly a year.

    The application lodged with Peterborough City Council states that the facility would power 22,550 homes.

    A new bridge would be constructed across New South Eau Drain for construction access, said the Local Democracy Reporting Service.

The plans also state that more than 400 trees and shrubs would be planted around the site’s boundaries to screen views from residential properties.

    In initial consultations, four residents raised objections, including over the loss of agricultural land.

    FRV said it would address the points that had been raised.

  • An independent internet | Special Report

This Independence Day, NADRA was slated to launch its first dematerialised ID card, designed to operate through a mobile application, eventually replacing the physical CNICs the agency currently issues. At the time of writing, the dematerialised card has not been launched, but it will undoubtedly be seen as part of Pakistan’s march towards a technology-dominated future envisioned under the recently passed Digital Nation Pakistan Act. The sentiment is echoed in Pakistan’s recently finalised National Artificial Intelligence Policy, passed two weeks ago.

The lofty technological aspirations stand in sharp contrast to what many people in the country are experiencing. At the start of this month, citing security concerns, the government announced that all mobile internet services would be suspended in Balochistan till August 31. Blanket internet shutdowns, often imposed without much explanation, are common in the province. This shutdown, juxtaposed with the grandiose statements made in the context of the digital nation – positioning technology as a boon and the state as a benefactor of technological development – is enough to prompt scepticism. The visions of a digital nation seem to come with a fine print that excludes large chunks of the population.

The story of the internet in Pakistan runs alongside the story of Pakistan, marked by inequality and severe restrictions. Civic spaces are shrinking in parallel, online and offline, undercut by draconian laws, an expansive surveillance apparatus and violence. Entire social media platforms have been blocked for months and years. The latest monitoring tech is routinely acquired to ratchet up surveillance in the country. Once a burgeoning space for thought and innovation, the internet is now a place where anyone posting an opinion is looking over their shoulder. For all the grandstanding of a “digital nation,” Pakistan has consistently ranked as not free on internet freedom indices and is backsliding.

    The promise of technology for openness, independence and inclusion for women has been overshadowed by the cost patriarchy imposes for existing online. This year alone, many murders in the name of so-called honour have occurred. Violence is wielded as a disciplinary tool to punish women and gender minorities for their online visibility. This is reflected in the state’s cynical use of outdated notions of morality and decency to police online spaces and ban entire platforms, culminating in an experience of women and gender-diverse people in online spaces and beyond that mirrors their experience elsewhere, marked by moral policing, violence and patriarchal anxiety.

The cracks in the digital nation are the same as in the nation. Grandiose speeches, narratives and technological feats do little to mask the exclusion and violence. Imposing digital nationhood onto a nation still struggling to get the basics of nationhood right merely puts a band-aid on a festering wound. Unless we address the structural and systemic issues in the country, we cannot march towards a technological future – technology will only reflect these problems and, in some cases, amplify them.

    Pakistan is not alone in the challenges it faces with technology. We stand at the cusp of major transformations induced by artificial intelligence and its widespread application with few guardrails. We are all ill-prepared for these changes. In a country already teetering under economic pressures, violence, social fissures and existential questions of nationhood, such ill preparedness could mean technology exacerbating these problems.

However, one must not surrender to despair. The moment requires radically different structures and approaches. We must resist falling for the narratives sold to us about technologies by big-tech companies, whose vision of a tech utopia has had dystopian consequences, accelerating our march towards climate, economic and societal crises. Pakistan’s new AI policy regurgitates techno-speak borrowed from Silicon Valley, utterly failing to chart out an independent vision of AI that speaks to local values and challenges. The policy sees AI as “a unique opportunity to harness digital disruption by educating an eager young population that can potentially propel the nation onto a growth trajectory to sustain our future national competitiveness and improve the lives of citizens.” It frames technology as a silver bullet, the magic stick that will gloss over decades of structural neglect, and looks at its young population only as a potential workforce without addressing the political, social and economic alienation it currently faces.

    Just as technology alone is not responsible for the litany of problems we face, the current framework of positioning technology as the panacea will not work. As we think about the future of the internet on Independence Day, we must reposition these tools and spaces with true independence, free of the strictures placed by governments through the narrow prism of economic gain and binaries imposed by tech companies blinded by profit margins. Technology, grounded in ideas beyond these narrow viewpoints, can serve the spirit of independence.


    The writer is a researcher and campaigner on human and digital rights issues.

  • Intelligent generation and optimization of resources in music teaching reform based on artificial intelligence and deep learning

    Datasets collection

To comprehensively verify the effectiveness and universality of the proposed algorithm, this study adopts two large-scale public MIDI datasets. First, the LAKH MIDI v0.1 dataset (https://colinraffel.com/projects/lmd/) is used as the main training data. It is a large-scale dataset containing over 170,000 MIDI files, and its rich melody, harmony, and rhythm materials provide a solid foundation for the model to learn music rules. In the data preprocessing stage, this study selects piano MIDI files from the LMD dataset because their melodic structures are clear, making them suitable as basic training materials for melody generation models.

    However, to address the limitation of the LAKH MIDI dataset being dominated by piano music and test the model’s universality across a wider range of instruments, this study introduces a second multi-instrument dataset for supplementary verification. It adopts the MuseScore dataset (https://opendatalab.com/OpenDataLab/MuseScore), which is a large-scale, high-quality dataset collected from the online sheet music community MuseScore. The core advantage of this dataset lies in its great instrumental diversity, covering sheet music from classical orchestral music to modern band instruments (such as guitar, bass, drums) and various solo instruments. This provides an ideal platform for testing the model’s ability to generate non-piano melodies.

    For both datasets, a unified preprocessing workflow is implemented:

    1) Instrument Track Filtering: For the LAKH MIDI dataset, this study uses its metadata to select MIDI files with piano as the primary instrument. For the MuseScore dataset, which contains complex ensemble arrangements, it applies heuristic rules to extract melodic tracks: prioritizing tracks whose instrument ID belongs to a melodic instrument (such as violin, flute, or saxophone) and which have the largest number of notes and the widest range.

    2) Note Information Extraction: This study uses Python’s mido library to parse each MIDI file. From the filtered tracks, it extracts four core attributes of each note: Pitch (i.e., MIDI note number, 0-127); Velocity (0-127); Start Time (in ticks); and Duration (in ticks).

    3) Time Quantization and Serialization: To standardize rhythm information, it quantizes the start time and duration of notes to a 16th-note precision. This means discretizing the continuous time axis into a grid with 16th notes as the smallest unit, where all note events are aligned to the nearest grid point. All note events are strictly sorted by their quantized start time to form a time sequence.

    4) Feature Engineering and Normalization: To eliminate mode differences, each melody is transposed to C major or A minor, allowing the model to focus on learning relative interval relationships rather than absolute pitches. Finally, each note event is encoded into a numerical vector. A typical vector might include: [normalized pitch, quantized duration, interval time from the previous note]. The sequence formed by these vectors serves as the final input to the model.

    5) Data Splitting: All preprocessed sequence data are strictly divided into training and test sets in an 80%/20% ratio.
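As a concrete illustration of steps 3)–5), the quantization, encoding, and splitting logic can be sketched as follows. The tick resolution, the exact vector layout, and all helper names are assumptions for illustration, not the implementation used in the study:

```python
import random

TICKS_PER_BEAT = 480             # assumed MIDI resolution (pulses per quarter note)
SIXTEENTH = TICKS_PER_BEAT // 4  # ticks per 16th-note grid cell

def quantize(ticks):
    """Snap a tick value to the nearest 16th-note grid point."""
    return round(ticks / SIXTEENTH) * SIXTEENTH

def encode_note(pitch, duration_ticks, gap_ticks):
    """Encode one note as [normalized pitch, quantized duration,
    quantized gap to the previous note] (durations in 16th-note units)."""
    return [pitch / 127.0,
            quantize(duration_ticks) // SIXTEENTH,
            quantize(gap_ticks) // SIXTEENTH]

def split_80_20(sequences, seed=42):
    """Shuffle the note sequences and split them 80%/20% into
    training and test sets."""
    seqs = list(sequences)
    random.Random(seed).shuffle(seqs)
    cut = int(0.8 * len(seqs))
    return seqs[:cut], seqs[cut:]
```

With a 480-PPQ file, for example, a note starting at tick 130 snaps to tick 120, the nearest 16th-note boundary.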

    Experimental environment and parameters setting

To ensure the efficiency of the experiment and the reliability of the results, the experimental environment and parameter settings are designed carefully. The experiments run on a high-performance computing cluster equipped with NVIDIA Tesla V100 GPUs, whose strong parallel computing capability handles the load of large-scale datasets and accelerates model training. The model is implemented in the TensorFlow 2.0 framework, with Keras used to construct and optimize the neural network structure. Because the LMD dataset is large and training is time- and compute-intensive, multi-GPU parallel training is adopted, which significantly shortens training time and improves experimental efficiency. In addition, the model's hyperparameters are carefully tuned so that it achieves its best performance during training. Table 2 displays the parameter settings.

    Table 2 Experimental parameter setting.

In hyperparameter tuning, a combination of grid search and manual fine-tuning is adopted: 10% of the training set is held out as a validation set, and every hyperparameter choice is ultimately judged by the model's F1 score on that validation set.

    Search space and selection reasons for key hyperparameters:

    1) Learning Rate: Searched within the range of [1e-3, 1e-4, 5e-5]. Experiments showed that a learning rate of 1e-3 led to unstable training with severe oscillations in the loss function, while 5e-5 resulted in excessively slow convergence. The final choice of 1e-4 achieved the best balance between convergence speed and stability.

    2) Batch Size: Tested three options: [32, 64, 128]. A batch size of 128, though the fastest in training, showed slightly decreased performance on the validation set, possibly getting stuck in a poor local optimum. A batch size of 64 achieved the optimal balance between computational efficiency and model performance.

    3) Number of LSTM Layers: Tested 1-layer and 2-layer LSTM networks. Results indicated that increasing to 2 layers did not bring significant performance improvement but instead increased computational costs and the risk of overfitting.

    4) Number of Neurons: Tested hidden layer neuron counts in [128, 256, 512]. 256 neurons proved sufficient to capture complex dependencies in melodic sequences, while 512 neurons showed slight signs of overfitting.

    5) Reward Function Weights: Tested weight ratios of artistic/technical aspects in [0.5/0.5, 0.7/0.3, 0.9/0.1]. Through subjective listening evaluation of generated samples, the ratio of 0.7/0.3 was deemed to best balance the melodic pleasantness and technical rationality.
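The tuning procedure described above amounts to an exhaustive grid search driven by validation F1. A minimal sketch, where `eval_fn` is a hypothetical stand-in for training the model once and returning its F1 score on the held-out validation split:

```python
from itertools import product

def grid_search(eval_fn,
                learning_rates=(1e-3, 1e-4, 5e-5),
                batch_sizes=(32, 64, 128)):
    """Evaluate every (learning rate, batch size) combination and keep
    the one with the highest validation F1 score."""
    best_cfg, best_f1 = None, float("-inf")
    for lr, bs in product(learning_rates, batch_sizes):
        f1 = eval_fn(lr, bs)
        if f1 > best_f1:
            best_cfg, best_f1 = (lr, bs), f1
    return best_cfg, best_f1
```

In practice each `eval_fn` call is a full training run, which is why the grid is kept small and then refined by manual fine-tuning, as described above.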

    Performance evaluation

To evaluate the performance of the constructed model, the proposed AC-MGME model is compared with DQN, MuseNet32, DDPG33 and the model of Abouelyazid (2023) in terms of Accuracy, F1-score, and melody generation time. The results on the LAKH MIDI v0.1 dataset are shown in Figs. 4, 5 and 6.

    Fig. 4

    Accuracy results for music melody prediction by various algorithms in the LAKH MIDI v0.1 dataset.

    Fig. 5

    F1-score results for music melody prediction by various algorithms in the LAKH MIDI v0.1 dataset.

In Figs. 4 and 5, it can be found that on the LAKH MIDI dataset, the proposed AC-MGME model algorithm achieves the highest scores in both key indicators: accuracy (95.95%) and F1 score (91.02%). From the perspective of the learning process, although the Transformer-based state-of-the-art (SOTA) model MuseNet shows strong competitiveness in the early stage of training, the AC-MGME model, relying on its efficient reinforcement learning framework, demonstrates greater optimization potential and surpasses MuseNet in the later stage of training. This not only proves the superiority of its final results but also reflects its excellent learning efficiency. At the same time, AC-MGME maintains a lead at all stages over the other reinforcement learning-based comparison models (such as DDPG and DQN).

    To more rigorously verify whether the leading advantage of the AC-MGME model in accuracy is statistically significant, a two-sample t-test is conducted on the results of each model in the final training epoch (Epoch 100). The significance level (α) adopted is 0.05, that is, when the p-value is less than 0.05, the performance difference between the two models is considered statistically significant, as shown in Table 3.
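For reference, the test statistic can be computed as below. The paper does not state whether the pooled-variance (Student's) or unequal-variance (Welch's) form was used; this sketch assumes Welch's variant and uses only the standard library (the p-value would then be obtained from the Student-t distribution, e.g. via `scipy.stats.ttest_ind` with `equal_var=False`):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic for unequal variances."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

def welch_df(a, b):
    """Welch-Satterthwaite approximation of the degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
```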

    Table 3 Results of statistical significance test (t-test) for the final round (Epoch 100) accuracy and F1 score of AC-MGME model and each comparison model on LAKH MIDI v0.1 dataset.

In Table 3, the test results clearly demonstrate that the performance advantage of the AC-MGME model over all comparison models in terms of the key accuracy indicator is statistically significant. Specifically, even when compared with the powerful benchmark model MuseNet, its p-value (0.021) is well below the 0.05 significance threshold, and the differences from models such as DDPG and DQN are even more pronounced (p < 0.001). This conclusion is further confirmed by the F1 score, which more comprehensively reflects the model’s precision and recall. AC-MGME is also significantly superior to all comparison models in terms of F1 score (all p-values are less than 0.05). Overall, these statistical test results make it very unlikely that the observed performance differences are due to random factors. They provide solid, quantitative statistical evidence for the core assertion that the proposed AC-MGME model exhibits strong performance in both generation accuracy and comprehensive performance.

    Fig. 6

    The comparison result chart of music melody generation time by each algorithm in LAKH MIDI v0.1 dataset.

Figure 6 illustrates how the developed AC-MGME model outperforms the comparison models in melody generation time. From the figure, the generation time of AC-MGME decreases steadily as training progresses, reaching the lowest value among all models at the 100th epoch, at only 2.69 s. In sharp contrast, the Transformer-based SOTA model MuseNet maintains an inference time of over 6.2 s, highlighting the limitations of large-scale models in real-time applications. Meanwhile, the efficiency of AC-MGME is also significantly superior to that of all other reinforcement learning-based comparison models.

    To further verify the superiority of the AC-MGME model in computational efficiency from a statistical perspective, a two-sample t-test is similarly conducted on the melody generation time of each model at the final epoch (Epoch 100), as shown in Table 4.

    Table 4 Results table of statistical significance test (t-test) between AC-MGME model and each comparison model in the final round (Epoch 100) melody generation time.

In Table 4, in comparisons with every baseline (including the heavyweight MuseNet and the other reinforcement learning models), the p-values are all far below 0.001. Such extremely low p-values indicate that the shorter generation time exhibited by the AC-MGME model is not a random fluctuation in the experiment, but a significant advantage with high statistical significance. This finding provides decisive statistical evidence for the applicability of the model in real-time personalized music teaching applications that require rapid feedback.

    To verify the generalization ability of the AC-MGME model in more complex musical environments, the final accuracy rates on the MuseScore dataset are compared, as shown in Fig. 7.

    Fig. 7

    Accuracy results for music melody prediction by various algorithms in the MuseScore dataset.

In Fig. 7, due to the significantly greater complexity and diversity of the MuseScore dataset in terms of instrument types and musical styles compared to the LAKH dataset, there is a universal decline in the accuracy of all models, which precisely reflects the challenging nature of this testing task. Nevertheless, the AC-MGME model once again demonstrates its strong learning ability and robustness, topping the list with an accuracy rate of 90.15% in the final epoch. It is particularly noteworthy that, in the face of complex musical data, the advantages of AC-MGME over other reinforcement learning models (such as DDPG and DQN) are further amplified. It successfully surpasses the powerful SOTA model MuseNet in the later stages of training. This result strongly proves that the design of the AC-MGME model is not overfitted to a single type of piano music, but possesses the core ability to migrate and generalize to a wider and more diverse multi-instrument environment, laying a solid foundation for its application in real and variable music education scenarios.

    To verify whether the generalization ability of the AC-MGME model across a wider range of instruments is statistically significant, a two-sample t-test is similarly conducted on the accuracy results of each model at the final epoch (Epoch 100) on the MuseScore dataset, as shown in Table 5.

Table 5 Statistical significance test (t-test) results for the final accuracy of the AC-MGME model and each comparison model on the MuseScore dataset.

In Table 5, the test results indicate that the performance advantage of the AC-MGME model is statistically significant. Even in comparison with its strongest competitor, MuseNet, its p-value (0.042) is below the 0.05 significance level, while the differences from models such as Abouelyazid (2023), DDPG, and DQN are even more pronounced (p < 0.001). This strongly proves that the leading position of this model on diverse, multi-instrument datasets is not accidental. More importantly, this conclusion fundamentally confirms the robustness and generality of the AC-MGME framework, indicating that it is not limited to the generation of single piano melodies but can effectively learn and adapt to the melodic characteristics of a wider range of instruments, thus having application potential in more diverse music education scenarios.

To evaluate the deployment potential of the model in real teaching scenarios, a dedicated test of inference performance and hardware resource consumption is conducted. The model’s performance is assessed not only on high-performance servers; it is also deployed on a typical low-power edge computing device (NVIDIA Jetson Nano) to simulate operation on classroom tablets or dedicated teaching hardware. The comparison of inference performance and resource consumption of each model on high-performance GPUs and edge devices is shown in Fig. 8.

    Fig. 8

Comparison of inference performance and resource consumption of each model on high-performance GPUs and edge devices.

In Fig. 8, an analysis of the inference performance and resource consumption test reveals the significant advantages of the proposed AC-MGME model in practical deployment. In the high-performance GPU (NVIDIA Tesla V100) environment, AC-MGME not only demonstrates the fastest inference speed (15.8 milliseconds) but also has a GPU memory footprint (350 MB) far lower than all comparison models; compared with the heavyweight Transformer model MuseNet (2850 MB) in particular, this highlights the advantage of its lightweight architecture. More crucially, in the test on the low-power edge device (NVIDIA Jetson Nano) simulating real teaching scenarios, the average inference latency of AC-MGME is only 280.5 milliseconds, fully meeting the requirements of real-time interactive applications.

    Two objective indicators, namely Pitch Distribution Entropy and Rhythmic Pattern Diversity, are further introduced to quantify the musical diversity and novelty of the generated melodies. This helps evaluate whether the model can generate non-monotonous and creative musical content. Among them, Pitch Distribution Entropy measures the richness of pitch usage in a melody. A higher entropy value indicates that the pitches used in the melody are more uneven and unpredictable, usually implying higher novelty. Rhythmic Pattern Diversity calculates the unique number of different rhythmic patterns (in the form of n-grams) in the melody. A higher value indicates richer variations in the rhythm of the melody. The comparison results and statistical analysis of the objective musicality indicators of the melodies generated by each model are shown in Table 6.
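The two indicators can be computed as sketched below; the exact n-gram length and definitions are assumptions, since the paper does not give formulas:

```python
from collections import Counter
from math import log2

def pitch_entropy(pitches):
    """Shannon entropy (in bits) of the pitch distribution of a melody.
    Higher values mean more uneven, less predictable pitch usage."""
    counts = Counter(pitches)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def rhythm_diversity(durations, n=3):
    """Number of unique rhythmic n-grams (consecutive duration tuples)
    appearing in a melody."""
    return len({tuple(durations[i:i + n])
                for i in range(len(durations) - n + 1)})
```

For instance, a melody split evenly between two pitches has an entropy of 1.0 bit, and a melody with a single repeated duration has a rhythm diversity of 1.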

Table 6 Comparison and statistical significance test (t-test) results of pitch distribution entropy and rhythmic pattern diversity between the AC-MGME model and each comparison model on the MuseScore dataset.

Table 6 reveals the in-depth characteristics of each model in terms of musical creativity, and its results provide more inspiring insights beyond the single accuracy indicator. As expected, MuseNet, as a large-scale generative model, obtains the highest scores in both Pitch Distribution Entropy and Rhythmic Pattern Diversity, and statistical tests show that its leading advantage is significant (p < 0.05), which proves its strong ability in content generation and innovation. However, a more crucial finding is that the AC-MGME model proposed in this study not only demonstrates highly competitive diversity but also significantly outperforms all other reinforcement learning-based comparison models in both indicators (p < 0.01). This series of results indicates that the AC-MGME model does not pursue unconstrained, maximized novelty, but rather achieves much higher musical diversity and creativity than similar DRL models on the premise of ensuring the rationality of musical structures. This good balance between “controllability” and “creativity” is an important reason why it obtained high scores in subsequent subjective evaluations, especially in “teaching applicability”.

To evaluate the subjective artistic quality and educational value that cannot be captured by technical indicators, a double-blind perception study is conducted. 30 music major students and 10 senior music teachers with more than 5 years of teaching experience are invited as expert reviewers. The reviewers score the melody segments generated by each model anonymously on a 1–5 scale (higher scores indicate better performance) without knowing the source of the melodies. User feedback under the proposed model is further analyzed, covering scores (1–5 points) in three aspects: user experience, learning effect, and the quality of the generated melody. The comparison with traditional music teaching is shown in Fig. 9.

    Fig. 9

    Comparison chart of user feedback results.

In Fig. 9, the user feedback shows that satisfaction with the AC-MGME model is higher than with traditional music teaching. In melody quality in particular, AC-MGME receives a high rating of 4.9 points, significantly better than traditional teaching's 3.7 points. AC-MGME also performs well in user experience and learning effect, scoring 4.8 and 4.6 respectively, far exceeding the 3.6 and 3.9 of traditional teaching. This shows that the AC-MGME model not only improves learning outcomes and the student experience but also delivers higher-quality results in melody creation.

    The expert evaluation results of subjective quality of melodies generated by each model are shown in Table 7, and the statistical analysis results are shown in Table 8.

    Table 7 Subjective quality expert evaluation results of melodies generated by each model.
    Table 8 Statistical significance test (t-test) results of subjective evaluation between AC-MGME model and each comparison model.

The results of Tables 7 and 8 show that, in the dimension of artistic innovation, MuseNet achieves the highest score with its strong generative capability, and its leading advantage is statistically significant (p = 0.008), which is completely consistent with the conclusion of the objective musicality indicators. However, in terms of melodic fluency, AC-MGME leads by a slight but statistically significant margin (p = 0.041), and expert comments generally considered its melodies to be “more in line with musical grammar and more natural to the ear”. The most crucial finding comes from the core dimension of teaching applicability, where the AC-MGME model obtains the highest score (4.80) by a wide margin, and its advantage over all models including MuseNet is highly statistically significant (p < 0.001). The participating teachers pointed out that the melodies generated by AC-MGME are not only pleasant to listen to, but more importantly, “contain clear phrase structures and targeted technical difficulties, making them very suitable as practice pieces or teaching examples for students”. This series of findings strongly proves that while pursuing technical excellence, this model more accurately meets the actual needs of music education, and can generate educational resources that combine artistic quality and practical value. This is a unique advantage that models simply pursuing novelty or accuracy cannot match.

    Discussion

    The results of this study clearly demonstrate the comprehensive advantages of the AC-MGME model across multiple dimensions. In terms of objective performance, the model not only outperforms all comparison benchmarks, including state-of-the-art models, in accuracy and F1 score, but also confirms the reliability of this advantage through strict statistical significance tests (p < 0.05). More importantly, in the subjective quality evaluation, AC-MGME achieved an overwhelming highest score in “teaching applicability”, indicating that it does not simply pursue technical indicators, but precisely meets the core needs of music education—generating musical content that combines structural rationality, artistic fluency, and teaching practical value. In addition, through deployment tests on low-power edge devices, this study is the first to empirically prove that while ensuring high-quality generation, the model has great potential for efficient and low-latency deployment in real classroom environments, laying a solid foundation for its transition from theory to application.

This study indicates that the proposed AC-MGME model performs strongly in melody generation quality, learning effectiveness, and user experience. In melody generation quality, the AC-MGME model scores 4.9/5, higher than traditional music teaching, demonstrating its strong ability to generate melodies with both artistic and technical merit. Meanwhile, AC-MGME also performs well in learning effectiveness, with a score of 4.6/5, higher than traditional teaching, proving its effectiveness in generating personalized learning paths and improving students’ skills. In user experience, AC-MGME scores 4.8/5, again higher than traditional teaching (3.6/5), further verifying the advantages of the interactive and convenient DRL-based teaching system. This is consistent with the findings of Dadman et al. (2024)34 and Udekwe et al. (2024)35. Particularly in terms of generation time, AC-MGME takes only 2.69 s to generate a melody, while other models such as DQN require 8.54 s; AC-MGME thus not only improves generation quality but also significantly enhances generation efficiency. In addition, the model performs excellently in generation quality (with an accuracy rate of 95.95% and an F1 score of 91.02% on the LAKH MIDI dataset), exceeding all other tested models and supporting the feasibility of real-time applications. This is consistent with the research of Chen et al. (2024)36.

    Therefore, the proposed model algorithm can efficiently generate melody and provide personalized learning experience. By dynamically adjusting the melody generation strategy, AC-MGME can optimize the generated content in real time according to students’ different needs and learning progress, which greatly improves the intelligence and personalization level of music education and provides valuable practical basis for the development of AI-driven music education tools in the future.

However, while affirming these achievements, the limitations and potential biases of the research must be carefully acknowledged. Firstly, in terms of datasets, although the introduction of the MuseScore dataset has greatly expanded the diversity of instruments, the content of both datasets still mainly focuses on Western tonal music. This may lead to poor performance of the model when generating non-Western music or modern atonal music, resulting in an “over-representation” bias in a broader cultural context. Secondly, the size of the user sample is also a limiting factor. Although the expert review panel composed of 40 music professionals has provided valuable in-depth insights, this scale is not sufficient to fully represent the diverse perspectives of global music educators and learners. Therefore, although the results of this study are robust within the test framework, caution is still needed when generalizing them to all music cultures and educational systems, and more localized verification should be conducted.

    Finally, the application of such AI technologies in the field of education will inevitably raise ethical issues that require serious attention. A core concern is the potential risk of abuse, particularly music plagiarism. The model may learn and reproduce copyrighted melody segments during training, thereby triggering intellectual property issues. To mitigate this risk, future system iterations must integrate plagiarism detection algorithms, for example, by comparing generated content with n-gram sequences in the training set, and design corresponding reward mechanisms to encourage originality. Another equally important ethical issue is the privacy and security of student data. While tracking and analyzing students’ practice data can enable personalized teaching, it also involves sensitive personal performance information. To address this, strict data management strategies must be adopted, including anonymizing and aggregating all data, ensuring system design complies with relevant regulations such as the General Data Protection Regulation (GDPR), and fully disclosing the content, purpose, and usage of data collection to students, parents, and teachers. These measures aim to build a trustworthy and responsible intelligent music education ecosystem.
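The n-gram comparison suggested above could be sketched as follows; the n-gram length and the pitch-sequence representation are illustrative assumptions, and a production system would need a more robust matcher (e.g. one tolerant of transposition):

```python
def ngrams(seq, n=8):
    """All length-n windows of a pitch sequence, as hashable tuples."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def overlap_ratio(generated, corpus_grams, n=8):
    """Fraction of the generated melody's pitch n-grams that appear
    verbatim in the training corpus; high values suggest copying."""
    gen = ngrams(generated, n)
    return len(gen & corpus_grams) / len(gen) if gen else 0.0
```

A reward term could then penalize melodies whose overlap ratio exceeds a chosen threshold, nudging the generator toward originality.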

    Continue Reading

  • ChatGPT boss warns against relying on AI as primary source of information: Here’s why – Mint

    1. ChatGPT boss warns against relying on AI as primary source of information: Here’s why  Mint
    2. Women with AI ‘boyfriends’ mourn lost love after ‘cold’ ChatGPT upgrade  Al Jazeera
    3. Did the system update ruin your boyfriend? Love in a time of ChatGPT | Arwa Mahdawi  The Guardian
    4. Is AI hitting a wall?  Financial Times
    5. GPT-5’s Voice Mode Can Hold a Decent Conversation, but Please Don’t Talk to ChatGPT in Public  CNET

    Continue Reading