In a world where medical advancements are rapidly extending lifespans, the conversation around quality of life posttreatment is becoming increasingly important. For reproductive-age cancer patients, a critical component of that conversation is fertility preservation. Katherine McDaniel, MD, a reproductive endocrinology and infertility specialist with the University of Southern California and HRC Fertility, shed light on the evolving landscape of this crucial field, emphasizing the urgency, challenges, and hope it offers.
McDaniel explains that fertility preservation is a vital option for patients whose cancer treatments, such as chemotherapy, radiation therapy, or surgery, could impact their reproductive health. The primary focus for women is freezing eggs, which can also be combined with sperm to freeze embryos. For men, the core practice is freezing sperm.
She noted that while these are the main methods, there are advanced techniques for patients who aren’t candidates for traditional methods. “For instance, prepubertal girls aren’t candidates for egg freezing because they don’t have a mature connection between their brain and their ovaries,” McDaniel said. For these patients, a pioneering technique called ovarian tissue cryopreservation has emerged. “We actually remove a whole ovary, we freeze it, and then transplant it back into the patient when they’re ready to conceive, oftentimes decades down the road.”
A Shift in Awareness and Urgency
McDaniel has witnessed a significant change in the landscape of fertility preservation over the past decade, primarily in the form of increased awareness. “I think 10 or 15 years ago, many, many patients simply weren’t counseled on fertility preservation in the setting of their cancer diagnoses,” she said. “Understandably, because their teams wanted to focus on treating their cancer, saving their lives, which, of course, is extremely important.”
Today, however, she sees a greater understanding within the oncology community, leading to more referrals. A crucial part of this shift is the recognition of the urgency involved. McDaniel emphasized that she wants to see patients as soon as they receive their diagnosis, so they can focus on fertility preservation, “maybe even before their oncology team has a treatment plan.” This early intervention is key to ensuring a patient’s reproductive future isn’t overlooked in the rush to begin life-saving treatment.
The Barriers and the Hope for Change
Despite the growing awareness, significant barriers remain, with access to care being a major issue.
“So many of the treatments that we offer aren’t covered under insurance, so the cost is very prohibitive for a lot of patients,” McDaniel noted. She pointed to the variance in state-level mandates, highlighting the progressive step taken in California with Senate Bill 600, which mandates fertility preservation coverage. However, many states lack such mandates, leaving patients with substantial out-of-pocket costs.
“That access to care issue is really big and will continue to be an issue over the next few years and decades,” she said.
Additionally, as the age of cancer diagnoses decreases for certain cancers, a new challenge emerges: approaching young patients with sensitivity. McDaniel acknowledged that talking to teens about their reproductive future, especially those who may have had limited contact with the health care system, requires a delicate touch.
“It’s definitely something that we all need to remain sensitive to, and perhaps even more sensitive to as we have more and more young patients diagnosed with cancer,” McDaniel said.
A Call to Action for Oncology Colleagues
McDaniel has a clear message for her oncology colleagues: “Keeping the ASCO recommendations in mind that all reproductive age patients should be counseled on the impact of the cancer therapies on their fertility.” Her primary piece of advice is to refer patients as soon as possible. “As soon as that diagnosis hits your inbox in a reproductive-age patient, please call us, email us, refer us to your patients,” she said.
McDaniel highlighted that fertility preservation treatments are time-sensitive and can be expedited to avoid treatment delays. “We are really sensitive to the expedited nature of treatment. We want to see these patients within 24 hours of them calling our clinics.”
She also highlighted the swift nature of the process itself, noting that treatment can often begin on the first day and be completed within 2 weeks. “We absolutely do not want to delay treatment in any way, shape, or form,” she affirmed. The message is clear: early, swift collaboration between oncology and reproductive endocrinology is the best path forward for patients facing a cancer diagnosis.
Donald Trump’s decision to let Nvidia and AMD export AI processors to China in exchange for a cut of their sales will have repercussions far beyond the U.S.
The semiconductor supply chain is global, involving a wide array of non-U.S. companies, often based in countries that are U.S. allies. Nvidia’s chips may be designed and sold by a U.S. company, but they’re manufactured by Taiwan’s TSMC with chipmaking tools from companies like the Netherlands-based ASML and Japan’s Tokyo Electron, and with components from suppliers like South Korea’s SK Hynix.
The U.S. leaned on these global companies for years to try to limit their engagement with China; these efforts picked up after the passage of the CHIPS Act and the expansion of U.S. chip-export controls in 2022. Washington has also pressured major transshipment hubs, like Singapore and the United Arab Emirates, to more closely monitor chip shipments to ensure that controlled chips don’t make their way to China in violation of U.S. law.
Within the U.S., discussion of Trump’s Nvidia deal has focused on what it means for China’s government’s and Chinese companies’ ability to get their hands on cutting-edge U.S. technology. But several other countries and companies are likely studying the deal closely to see if they might get an opening to sell to China as well.
Trump’s Nvidia deal “tells you that [U.S.] national security is not really the issue, or has never been the issue” with export controls, says Mario Morales, who leads market research firm IDC’s work on semiconductors. Companies and countries will “probably have to revisit what their strategy has been, and in some cases, they’re going to break away from the U.S. administration’s policies.”
“If Nvidia and AMD are given special treatment because they’ve ‘paid to play’, why shouldn’t other companies be doing the same?” he adds.
Getting allies on board
The Biden administration spent a lot of diplomatic energy to get its allies to agree to limit their semiconductor exports to China. First, Washington said that manufacturers like TSMC and Intel that wanted to tap billions in subsidies could not expand advanced chip production in China. Then, the U.S. pushed for its allies to impose their own sanctions on exports to China.
“Export controls and other sanctions efforts are necessarily multilateral, yet are fraught with collective action problems,” says Jennifer Lind, an associate professor at Dartmouth College and international relations expert. “Other countries are often deeply unenthusiastic about telling their firms—which are positioned to bring in a lot of revenue, which they use for future innovation—that they cannot export to Country X or Country Y.”
This translates to “refusing to participate in export controls or to devoting little or no effort to ensuring that their firms are adhering to the controls,” she says.
Paul Triolo, a partner at the DGA-Albright Stonebridge Group, points out that “Japanese and Dutch officials during the Biden administration resisted any serious alignment with U.S. controls,” and suggests that U.S. allies “will be glad to see a major stepping back from controls.”
Ongoing trade negotiations between the U.S. and its trading partners could weaken export controls further.
Chinese officials may demand a rollback of chip sanctions as part of a grand bargain between Washington and Beijing, similar to how the U.S. agreed to grant export licenses to Nvidia and AMD in exchange for China loosening its controls on rare earth magnets.
Japan and South Korea may also bring up the chip controls as part of their own trade negotiations with Trump.
‘Expect continuing diversions’
A separate issue is the set of controls over the physical transfer of Nvidia GPUs. The U.S. has leaned on governments like Singapore, Malaysia and the United Arab Emirates to prevent advanced Nvidia processors from making their way to China.
Scrutiny picked up in the wake of DeepSeek’s surprise AI release earlier this year, amid allegations that the Hangzhou-based startup had trained its powerful models on Nvidia processors that were subject to export controls. (The startup claims that it acquired its processors before export controls came into effect).
As of now, the two chips allowed to be sold in China, Nvidia’s H20 and AMD’s MI308, are not the most powerful AI chips on the market. The leading-edge processors, like Nvidia’s Blackwell chip, cannot be sold to China.
That means chip smuggling will continue to be a concern for the U.S. government. Yet “enforcement will be spotty,” Triolo says. “The Commerce Department lacks resources to track GPUs globally, hence expect continuing diversions of limited amounts of GPUs to China via Thailand, Malaysia, and other jurisdictions.”
Triolo is, instead, focused on another loophole in the export control regime: Chinese firms accessing AI chips based in overseas data centers. “There is no sign that the Trump Commerce Department is gearing up to try and close this gaping loophole in U.S. efforts to limit Chinese access to advanced compute,” he says.
How much will the global supply chain change?
Not all analysts think we’ll see a complete unraveling of the export control regime.
“The controls involve a complex multinational coalition that all parties will be hesitant to disrupt, given how uncertain the results will be,” says Chris Miller, author of Chip War: The Fight for the World’s Most Critical Technology. He adds that many of these chipmakers and suppliers don’t have the same political heft as Nvidia, the world’s most valuable company.
Yet while these companies may not be as politically savvy as Nvidia, they’re just as important. TSMC, for example, is the only company that can manufacture the newest generation of advanced chips; ASML is the only supplier of the extreme ultraviolet lithography machines used to make the smallest semiconductors.
“I don’t believe it’s leverage that the Trump administration will easily give away,” says Ray Wang, a semiconductor researcher at the Futurum Group.
YouTube has started using artificial intelligence (AI) to figure out when users are children pretending to be adults on the popular video-sharing platform amid pressure to protect minors from sensitive content.
The new safeguard is being rolled out in the United States as Google-owned YouTube and social media platforms such as Instagram and TikTok are under scrutiny to shield children from content geared for grown-ups.
A form of AI known as machine learning will be used to estimate the age of users based on a variety of factors, including the kinds of videos watched and account longevity, according to James Beser, YouTube’s director of product management for youth.
“This technology will allow us to infer a user’s age and then use that signal, regardless of the birthday in the account, to deliver our age-appropriate product experiences and protections,” Beser said.
“We’ve used this approach in other markets for some time, where it is working well.”
The age-estimation model enhances technology already in place to deduce user age, according to YouTube.
Users will be notified if YouTube believes them to be minors, giving them the option to verify their age with a credit card, selfie, or government ID, according to the tech firm.
Social media platforms are regularly accused of failing to protect the well-being of children.
Australia will soon use its landmark social media laws to ban children under 16 from YouTube, a top minister said late last month, stressing a need to shield them from “predatory algorithms”.
Communications Minister Anika Wells said four in ten Australian children had reported viewing harmful content on YouTube, one of the most visited websites in the world.
Australia announced last year it was drafting laws that will ban children from social media sites such as Facebook, TikTok and Instagram until they turn 16.
“Our position remains clear: YouTube is a video sharing platform with a library of free, high-quality content, increasingly viewed on TV screens,” the company said in a statement at the time. “It’s not social media.”
On paper, the ban is one of the strictest in the world. It is due to come into effect on December 10.
The legislation has been closely monitored by other countries, with many weighing whether to implement similar bans.
Emirates allows crypto payments for flights through Crypto.com starting in 2026.
Air Arabia accepts AE Coin, a dirham-backed stablecoin, for local crypto payments.
Platforms like Travala accept a variety of cryptos like BTC, ETH, USDT, and more.
Crypto bookings in the UAE are growing, with new partnerships set for broader acceptance.
The UAE is becoming a leader in cryptocurrency adoption, especially in the travel sector. Airlines like Emirates and Air Arabia are integrating crypto payment options to attract tech-savvy travelers and digital nomads. As of July 2025, Emirates has partnered with Crypto.com, allowing passengers to pay for flights and in-flight services using cryptocurrencies. This partnership includes digital assets like Bitcoin (BTC), Ether (ETH), and stablecoins such as USDT and USDC. By integrating cryptocurrency payments, UAE airlines aim to offer more payment flexibility for travelers.
The shift toward digital currencies comes as the UAE continues to enhance its financial ecosystem, fostering growth in digital asset adoption. The UAE government has set up initiatives like the Dubai Virtual Assets Regulatory Authority (VARA), which could provide more clarity and support for crypto-powered services, including flight bookings and loyalty programs.
How to Book Flights with Cryptocurrencies
Booking flights with cryptocurrency in the UAE is simple through supported travel platforms. Travelers can use platforms like Travala, Alternative Airlines, and Crypto.com. These platforms accept various cryptocurrencies and allow users to pay for flights using Bitcoin, Ether, or other digital currencies.
To book a flight via cryptocurrency, travelers follow a straightforward process. First, they need to choose a crypto-friendly platform, such as Travala, and enter their flight details. After selecting the preferred flight, they proceed to the payment section, where they can choose cryptocurrency as the payment method.
At this point, they’ll be prompted to connect their crypto wallet and authorize the transaction. Once the payment is confirmed, the traveler will receive their e-ticket. This seamless process makes it easier for crypto users to book flights while avoiding the use of traditional banking methods.
UAE Airlines and Platforms Accepting Crypto Payments
Several key airlines and travel agencies in the UAE now accept cryptocurrency payments. Emirates, the country’s flagship airline, is one of the most notable. Their collaboration with Crypto.com will enable passengers to pay with various digital currencies, including Bitcoin and stablecoins, by 2026. Air Arabia, another major UAE airline, accepts AE Coin, a stablecoin tied to the UAE dirham, providing local users with a more familiar digital currency option for booking flights.
Platforms like Travala and Alternative Airlines have also gained popularity among cryptocurrency users. Travala supports multiple digital currencies, including BTC, ETH, USDT, and USDC.
These platforms cater to global travelers and provide options for booking flights, hotels, and experiences. Alternative Airlines extends its services to over 650 global airlines, including major UAE carriers, making it a popular choice for booking flights with cryptocurrency.
Tips for a Smooth Crypto Flight Booking Experience
Travelers should take a few precautions to ensure a smooth experience when using cryptocurrency for flight bookings. One essential step is reviewing transaction fees and exchange rates. While cryptocurrency payments can offer benefits, like lower fees compared to traditional methods, fluctuations in digital currency values may impact the final cost.
It is also important to ensure that the chosen platform is reliable and secure. Trusted platforms use regulated payment gateways to protect user information.
Keeping transaction records, including receipts, blockchain IDs, and booking confirmations, is another essential step. These records serve as proof of purchase and can be helpful for any future issues, like refunds or disputes.
Plans have been submitted to build an electric vehicle test track at a school in North Lincolnshire.
Baysgarth School in Barton-upon-Humber hopes to build a 10ft (3m) wide tarmac track around the perimeter of its sports field.
It would be used by students to test electric cars made during their science and engineering lessons.
The track would also be available to local athletic groups already using the school’s facilities, other schools, and the wider community.
North Lincolnshire Council has already pledged to put £40,000 towards the building of the track.
According to the Local Democracy Reporting Service, the planning application said it would enhance the school’s STEM Greenpower project.
Since 2018, pupils have been designing and building electric cars but have had to travel off-site to a track near Gainsborough to test designs.
Motorsport UK was consulted over the track design to ensure students could collect consistent data during electric vehicle tests.
It would be a smooth, continuous circular track to allow for acceleration and performance tests, preparing drivers for Greenpower events across the country.
Some of the school’s students have raced at Silverstone and have secured apprenticeships with major employers like Ineos.
Andrew Browne, project lead for the track, previously told Construction UK Magazine it would be a “dynamic, multi-layered investment in STEM education”.
TeaOnHer, a dating app that allows men to anonymously post accounts of women they have dated, suffered a serious privacy lapse that exposed thousands of users’ personal data to the internet.
According to security researchers, the leaked sensitive information included selfies, driver’s licences, email addresses, and private messages.
The breach was reported earlier this month by TechCrunch and was patched within a week.
The hack of the men’s dating app drew public attention because, at the time, TeaOnHer was the second most downloaded app on the Apple App Store, while its rival, Tea, was third.
The incident is not the first of its kind: just weeks earlier, Tea, an app that allows women to share red flags about men they have dated, suffered a major privacy breach of its own.
Following that incident, Tea also faced a second leak within days, exposing the private messages and information of 1.1 million users.
Given the scale of the data exposure, both Tea and TeaOnHer may face lawsuits from affected users, according to NBC News.
The latest breach prompted discussion among users. One user wrote, “Was this just a revenge project made by the original with the only intention of doxxing some men?”
Another commented, “Wait, so they saw what happened with the first app getting ‘hacked’ and decided, let’s store user info in the same negligent way?”
However, Newville Media Corporation, the developer of TeaOnHer, has not yet commented on the breach.
The surge in breaches has shed light on the darker side of modern dating and ignited debate over the ethics and security of anonymous dating platforms.
The user behavior dataset used in this study was collected from multiple well-known online education platforms, including the international platform Coursera and the domestic platform NetEase Cloud Classroom.
The Coursera dataset covers user activity from 2019 to 2022 and includes detailed logs of various learning-related interactions, such as course browsing, video viewing, quiz submissions, and forum participation. These multidimensional records provide a comprehensive view of users’ learning behaviors and habits. In contrast, the data from NetEase Cloud Classroom spans from June 2020 to January 2023 and features a diverse user base, including high school students, university students, and working professionals across different age groups and educational backgrounds. This dataset not only captures core learning behaviors—such as course selection and study duration—but also includes user interaction data such as comments, likes, and discussions, offering rich, multidimensional insights for behavior analysis.
The dataset was collected through direct collaboration with the online education platforms and constitutes proprietary experimental data. Rigorous preprocessing and cleaning procedures were applied to ensure data quality and usability. In total, the dataset contains approximately 200,000 samples with multiple feature dimensions, including user demographics, course selection, study time, and interaction behaviors, allowing for a comprehensive representation of user activity on online education platforms.
To safeguard user privacy and ensure data anonymity, the study strictly adhered to data protection regulations and privacy standards during data collection. All user information was encrypted to prevent any disclosure of personal data.
During preprocessing, a variety of methods were employed: missing values were handled using mean imputation or interpolated based on behavioral correlations; outliers were detected and removed using box plot techniques; and all numerical features were standardized to have a mean of 0 and a variance of 1. This normalization allowed different features to be compared on the same scale, thereby enhancing the efficiency and accuracy of model training. Through these preprocessing steps, data quality was significantly improved, ensuring the stability and performance of the predictive models.
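As a minimal sketch of the preprocessing steps described above (mean imputation, box-plot outlier removal, and standardization to mean 0 and variance 1), applied to a single numeric feature column; the data values here are illustrative, not drawn from the study’s dataset:

```python
import numpy as np

def preprocess(x: np.ndarray) -> np.ndarray:
    """Mean imputation, box-plot (IQR) outlier removal, and z-score
    standardization for one numeric feature column (sketch)."""
    x = x.astype(float)
    # 1. Mean imputation for missing values
    mean = np.nanmean(x)
    x = np.where(np.isnan(x), mean, x)
    # 2. Box-plot rule: drop values outside the 1.5 * IQR whiskers
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    x = x[(x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)]
    # 3. Standardize to mean 0, variance 1
    return (x - x.mean()) / x.std()

# Illustrative feature with one missing value and one outlier
feature = np.array([1.0, 2.0, np.nan, 3.0, 100.0])
z = preprocess(feature)
```

After this pass, the missing value is filled, the outlier is dropped by the whisker rule, and the remaining values are on a common z-score scale.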
Experimental environment
The experiments in this study were conducted on a high-performance computing server equipped with multiple GPU accelerators to support the training and validation of deep learning models. Python was used as the primary programming language, with TensorFlow and Keras frameworks employed to construct and train the BPNN model. Additionally, the WRF model was implemented using the Random Forest Classifier from the Scikit-learn library, with custom adjustments made to incorporate weighting mechanisms.
Parameter settings
To optimize the improved BPNN model based on the WRF framework, this study carefully configured several key parameters. Table 2 presents the main parameters and their corresponding values used during model training. The number of decision trees in the WRF model was set to 100, based on extensive experimentation and cross-validation. This number was chosen to strike a balance between prediction accuracy and computational efficiency. While increasing the number of trees can reduce variance and improve stability, it also leads to diminishing returns and higher computational costs beyond a certain point. Empirical results showed that 100 trees offered sufficient ensemble diversity to achieve high prediction accuracy without introducing excessive computational overhead. Moreover, this configuration helped mitigate overfitting in the presence of imbalanced data by enhancing model robustness. The maximum depth of each decision tree was set to 10, a parameter that directly influences model complexity. Deeper trees are capable of capturing more intricate patterns, but they also increase the risk of overfitting, particularly in noisy or limited datasets. Conversely, overly shallow trees may underfit and fail to capture key relationships. After testing various depth values ranging from 5 to 20, a depth of 10 was selected as the optimal setting. This depth offered a balanced trade-off, allowing the model to capture meaningful patterns without overfitting. It also ensured the model remained interpretable—an important factor for understanding user behavior on online education platforms. Other parameters were configured based on best practices in decision tree modeling and the specific characteristics of the user behavior data. For instance, the “gini” criterion was used to measure split quality due to its computational efficiency and effectiveness, especially with moderately balanced datasets. 
The min_samples_split parameter was set to 2, allowing internal nodes to continue splitting until all leaves reached purity, which is a standard setting in many decision tree implementations.
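The random-forest configuration above can be sketched with scikit-learn. Note that `class_weight="balanced"` is only a stand-in for the paper’s custom weighting mechanism, which scikit-learn does not provide out of the box, and the imbalanced toy data is illustrative:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Base forest with the parameters reported in Table 2
rf = RandomForestClassifier(
    n_estimators=100,      # number of decision trees
    max_depth=10,          # maximum depth of each tree
    criterion="gini",      # split-quality measure
    min_samples_split=2,   # split internal nodes down to pure leaves
    class_weight="balanced",  # stand-in for the WRF weighting scheme
    random_state=42,
)

# Hypothetical imbalanced toy data (90% / 10% classes)
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
rf.fit(X, y)
train_acc = rf.score(X, y)
```

The `class_weight` option up-weights minority-class samples during training, which mirrors the motivation, if not the exact mechanism, of the weighted random forest described in the text.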
Table 2 Key parameters used in model training and their set values.
The selection of hyperparameters in deep learning models—such as learning rate, number of hidden layers, dropout rate, and regularization coefficient—is inherently complex and computationally demanding. Although this study carefully determined these values based on experimental results using data from an online education platform, it remains important to discuss the generalizability of these parameters in similar contexts and the associated cost of their selection. In this study, hyperparameters including a learning rate of 0.001, 50 hidden layer nodes, and a regularization coefficient of 0.001 were selected to align with the characteristics of the dataset, which comprised user login data, course interactions, and other behavioral metrics. These settings were chosen to optimize the performance of the integrated WRF-BPNN model for predicting user behavior within the AI-driven online education context. However, these parameter settings may not be universally applicable across all platforms or datasets. Variations in user demographics, engagement patterns, or course content may necessitate different configurations. For example, if a dataset is skewed toward a small group of highly active users, adjustments to the learning rate or the number of hidden layers may be required to prevent overfitting or underfitting. For more diverse user behaviors or larger datasets, more complex architectures—such as deeper networks or larger batch sizes—may be necessary to effectively capture interaction patterns. Conversely, for simpler or smaller datasets, a more lightweight configuration with fewer hidden layers or a lower learning rate may still yield satisfactory results.
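A minimal Keras sketch of a BPNN with the stated hyperparameters (learning rate 0.001, 50 hidden-layer nodes, L2 regularization coefficient 0.001); the input dimension and the binary output are assumptions, since the paper does not report them:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

n_features = 10  # assumed input dimension, not reported in the paper

model = keras.Sequential([
    layers.Input(shape=(n_features,)),
    # Single hidden layer: 50 nodes, L2 regularization coefficient 0.001
    layers.Dense(50, activation="relu",
                 kernel_regularizer=regularizers.l2(0.001)),
    # Binary output (e.g. dropout vs. continued engagement) is assumed
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
```

With these assumptions the network has 10 × 50 + 50 = 550 hidden-layer parameters and 50 + 1 = 51 output-layer parameters, 601 in total.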
Hyperparameter tuning is inherently resource-intensive and can significantly increase the time and computational cost of model training. While methods such as Grid Search and Random Search are commonly used and effective, they become computationally expensive when applied to large datasets or deep learning models with many parameters. In this study, the chosen hyperparameters reflect a balance between model performance and computational feasibility. Initial parameter selection was conducted using Random Search, followed by cross-validation to fine-tune these values, ensuring an optimal trade-off between prediction accuracy and training efficiency. To mitigate the high cost of hyperparameter tuning, future work could explore automated optimization techniques such as Bayesian optimization or genetic algorithms. These methods provide more efficient exploration of the hyperparameter space and can significantly reduce computational demands without compromising model performance.
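The two-stage procedure described above, random sampling of candidate settings scored by cross-validation, can be sketched with scikit-learn; `MLPClassifier` stands in for the BPNN here, and the search ranges are hypothetical, not the study’s actual grids:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

# Hypothetical search space over the hyperparameters discussed in the text
search = RandomizedSearchCV(
    MLPClassifier(max_iter=300, random_state=0),
    param_distributions={
        "learning_rate_init": loguniform(1e-4, 1e-1),
        "hidden_layer_sizes": [(25,), (50,), (100,)],
        "alpha": loguniform(1e-4, 1e-1),  # L2 regularization strength
    },
    n_iter=10,        # number of randomly sampled candidates
    cv=3,             # each candidate scored by cross-validation
    scoring="f1",
    random_state=0,
)

X, y = make_classification(n_samples=300, random_state=0)
search.fit(X, y)
best = search.best_params_
```

Each of the 10 sampled settings is fitted and scored on 3 folds, so the search cost grows as `n_iter * cv` model fits, which is the resource trade-off the text refers to.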
In practical applications, transferring hyperparameters across different datasets or platforms presents another challenge. This study suggests that certain parameters—such as the learning rate and regularization coefficient—tend to be relatively robust across varying data characteristics. However, others, such as the number of hidden layers or the overall network architecture, may require adjustment based on dataset size and behavioral complexity. For large-scale or behaviorally rich datasets, advanced tuning techniques like cross-validation or sensitivity analysis can help identify the most impactful parameters. Incorporating domain knowledge into this process can further improve both the efficiency and effectiveness of hyperparameter selection. In summary, hyperparameter selection is a critical step in model development. In the context of dynamic and diverse datasets—such as those from online education platforms—careful consideration must be given to both the computational cost and generalizability of the chosen parameters.
Performance evaluation
Figure 4 illustrates the impact of different hyperparameters on the model’s prediction performance. When the learning rate is reduced from 0.01 to 0.001, the model’s Accuracy, Recall, and F1-score improve to 92.3%, 89.7%, and 90.8%, respectively. This indicates that a lower learning rate helps the model converge more effectively and reduces the risk of overfitting. Additionally, the model performs best when the number of hidden layer nodes is set to 50. Increasing the nodes to 100 leads to a decline in performance, suggesting that excessive model complexity can hinder learning. Furthermore, the model achieves optimal performance with a regularization coefficient of 0.001, highlighting the role of appropriate regularization in enhancing generalization.
Fig. 4
The influence of different parameters on the prediction results of the model (abscissa: “1” = learning_rate = 0.01; “2” = learning_rate = 0.001; “3” = hidden_layers = 50; “4” = hidden_layers = 100; “5” = regularization coefficient = 0.01; “6” = regularization coefficient = 0.001).
Figure 5 shows the performance comparison results among different models. The proposed integrated model performs well in several benchmark models, especially in three important evaluation indexes: Accuracy, Recall and F1-score, which are significantly improved compared with other models. Firstly, in terms of accuracy, the proposed ensemble model (WRF + BPNN + CNN + Attention) reaches 92.3%, which is obviously improved compared with the traditional BPNN (87.3%) and the unweighted random forest (89.2%). In addition, the CNN and LSTM are 90.5% and 90.1% respectively, while the accuracy of extreme gradient boosting (XGBoost) is 91.0%. It shows that the integrated model is not only superior to the traditional single model (such as BPNN and unweighted random forest), but also surpasses other deep learning methods, such as CNN, LSTM and XGBoost. This shows that the integration method can better handle complex user behavior data by combining the advantages of different models, thus improving the overall prediction accuracy. In the recall rate, the performance of the proposed integrated model is equally outstanding, reaching 89.7%. This result is obviously higher than other models, especially traditional BPNN (84.1%) and unweighted random forest (82.4%). In addition, the recall rates of CNN, LSTM and XGBoost are 85.3%, 86.2% and 87.5% respectively. Improving the recall rate means that the model can better identify minority samples and reduce the situation of missing detection. The integrated model shows great ability in this respect, especially when dealing with unbalanced data, which can better capture the feature information of minority categories and further improve the practicability and reliability of the model. F1 score, as an indicator of comprehensive consideration of accuracy and recall, directly reflects the overall performance of the model. In terms of F1 score, the score of integrated model reaches 90.8%, which is significantly higher than all benchmark models. 
The F1 score of the traditional BPNN is 85.6%, as is that of the unweighted random forest; CNN and LSTM reach 87.8% and 88.1%, respectively, and XGBoost reaches 89.2%. The ensemble model achieves the highest F1 score among all compared models, indicating strong overall predictive capability through a balance of precision and recall. This balanced performance avoids the common pitfall of overemphasizing one metric at the expense of the other. Overall, the proposed model demonstrates clear advantages in accuracy, recall, and F1 score, particularly when handling complex user behavior data. It effectively addresses the limitations of traditional models and enhances prediction accuracy. Compared to other advanced methods such as CNN, LSTM, and XGBoost, the ensemble model, by combining the strengths of WRF, BPNN, and MHAM, offers superior overall performance, improved generalization, and greater application potential.
Fig. 5
Performance comparison results among different models.
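As a concrete illustration of how the three reported metrics are computed, the following is a minimal sketch using scikit-learn on placeholder labels (not the paper's data); macro averaging is assumed here, which is one common choice for imbalanced tasks:

```python
# Hypothetical sketch: computing accuracy, recall, and F1 with scikit-learn.
# y_true / y_pred are illustrative placeholders, not the study's predictions.
from sklearn.metrics import accuracy_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

accuracy = accuracy_score(y_true, y_pred)
# Macro averaging weights each class equally, which is what makes recall
# sensitive to minority-class detection on imbalanced data.
recall = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")

print(f"accuracy={accuracy:.3f} recall={recall:.3f} f1={f1:.3f}")
```

Because F1 is the harmonic mean of precision and recall per class, a model that trades recall for precision (or vice versa) is penalized, which is why the ensemble's balanced 90.8% F1 is meaningful.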
Figure 6 presents the cross-validation results. Increasing the number of folds from 5 to 10 improves the model’s average accuracy, recall, and F1 score to 92.3%, 89.7%, and 90.8%, respectively. However, further increasing the folds to 15 causes a slight decline in performance. This suggests that 10-fold cross-validation offers a good balance, ensuring strong generalization while avoiding over-fitting.
Fig. 6
Cross-validation results.
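The fold-count comparison described above can be sketched with scikit-learn's `cross_val_score`; the classifier and synthetic dataset below are stand-ins, not the paper's ensemble or data:

```python
# Hypothetical sketch of comparing 5-, 10-, and 15-fold cross-validation.
# A random forest on synthetic imbalanced data stands in for the ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)

for k in (5, 10, 15):
    # Stratified folds are used by default for classifiers.
    scores = cross_val_score(clf, X, y, cv=k, scoring="f1_macro")
    print(f"{k:>2}-fold: mean F1 = {scores.mean():.3f} (std {scores.std():.3f})")
```

The trade-off the paper observes is typical: more folds mean larger training sets per fold (lower bias) but smaller, noisier validation sets, so a mid-range value like 10 often gives the most stable estimate.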
Figure 7 illustrates the relationship between training time and predictive performance. As the data volume increases, training time grows linearly, while accuracy, recall, and F1 score also improve. Specifically, when the dataset size rises from 50,000 to 200,000, accuracy increases from 91.3% to 92.3%, recall from 88.0% to 89.7%, and F1 score from 89.6% to 90.8%. This indicates that larger datasets enhance the model’s predictive ability, but require greater computational resources.
Fig. 7
Relationship between training time and prediction performance.
Figure 8 illustrates the impact of different user behavior features on the model’s prediction results. Using the correct answer rate as a feature yields the highest accuracy, recall, and F1 score—92.3%, 89.7%, and 90.8%, respectively. This highlights the correct answer rate as a key predictor that significantly enhances model performance. Features like learning time and course clicks also improve the model’s performance, though to a lesser extent. In contrast, interaction frequency has relatively little effect on the model’s accuracy.
Fig. 8
Influence of user behavior characteristics on prediction results.
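The paper does not specify how the per-feature impact was measured; one standard way to produce such a ranking is permutation importance, sketched below with hypothetical feature names matching the ones discussed and synthetic data:

```python
# Hypothetical sketch: permutation importance measures how much shuffling one
# feature degrades held-out performance. Feature names and data are
# illustrative placeholders, not the study's.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

features = ["correct_answer_rate", "learning_time",
            "course_clicks", "interaction_frequency"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
# Each feature is shuffled n_repeats times on the test split; the mean drop
# in score is its importance.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)

for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:<24} {imp:.3f}")
```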
Figure 9 compares the proposed improved integrated model with several benchmark algorithms. All experiments were conducted on the same dataset, and multiple evaluation metrics were recorded for each model to assess the superiority and effectiveness of the improved integrated model. The results show that performance differences between the improved WRF-BPNN ensemble model and traditional models like SVM, Neural Networks, and LightGBM are statistically significant. Paired t-tests confirm that the integrated model outperforms others in accuracy, recall, and F1 score, with all p-values below 0.05, indicating strong statistical significance.
Fig. 9
Performance comparison of different models.
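The paired t-test mentioned above compares two models' scores on the same cross-validation folds; a minimal sketch with SciPy follows, using illustrative per-fold F1 scores rather than the paper's actual values:

```python
# Hypothetical sketch of the paired significance test: per-fold F1 scores for
# two models evaluated on identical folds (values are illustrative).
from scipy import stats

ensemble_f1 = [0.905, 0.912, 0.908, 0.901, 0.915,
               0.907, 0.910, 0.903, 0.909, 0.911]
svm_f1      = [0.871, 0.878, 0.869, 0.865, 0.880,
               0.872, 0.876, 0.868, 0.874, 0.877]

# A paired test is appropriate because both score lists come from the same
# folds, so per-fold differences cancel out fold-to-fold variance.
t_stat, p_value = stats.ttest_rel(ensemble_f1, svm_f1)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

A p-value below 0.05, as reported, indicates the per-fold gap is consistent enough that it is unlikely to be chance variation across folds.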
Compared to SVM, the improved integrated model clearly excels at handling nonlinear relationships and high-dimensional data. While SVM performs well on small datasets, its effectiveness decreases with larger, more complex data. In contrast, the integrated model leverages the strengths of both WRF and BPNN to effectively manage large-scale data and capture nonlinear patterns. Against neural networks, the integrated model achieves higher recall and F1 scores, particularly in addressing class imbalance. Although neural networks have strong feature-learning capabilities, they often struggle to detect minority classes in imbalanced datasets. The weighted mechanism in WRF enhances minority-class recognition, helping maintain strong predictive performance under imbalance. Compared to LightGBM, the integrated model offers better prediction accuracy and stability. LightGBM, a fast and efficient gradient boosting algorithm, performs well on large datasets but can be challenged by complex nonlinear relationships and high-dimensional features. By incorporating BPNN’s nonlinear fitting capability, the integrated model better captures these complexities, resulting in superior accuracy and stability.
To assess the competitiveness and originality of the proposed model, this study conducts a direct evaluation against four representative works. These works focus on user or student behavior prediction in educational settings. To ensure fairness and consistency, the key predictive models from these studies are re-implemented on the same test dataset, with all parameter settings replicated precisely as originally reported. Table 3 presents a detailed comparison between the proposed ensemble model (WRF + BPNN + CNN + Attention) and the benchmark models from the literature.
Table 3 Model performance comparison results.
As shown in Table 3, the results clearly demonstrate the consistent performance advantage of the proposed ensemble model (WRF + BPNN + CNN + Attention) over several recent state-of-the-art approaches. In terms of accuracy, this model achieved 92.3%, surpassing Luo et al.’s machine learning method by 3.2%, Yildiz Durak & Onan’s PLS-SEM + ML approach by 2.0%, Jain & Raghuram’s SEM-ANN model by 1.6%, and Mathur et al.’s hybrid SEM-ANN framework by 1.1%. The performance gains are even more notable in recall, where the proposed model achieved 89.7%—substantially higher than the baseline models, which ranged from 83.5 to 86.9%. This demonstrates a stronger capacity to detect minority behavior classes, such as high-engagement users or at-risk students, particularly within imbalanced datasets. Additionally, the F1-score—a balanced measure that considers both precision and recall—reached 90.8%, underscoring the overall predictive superiority of this model.
These improvements stem from several targeted architectural innovations. First, unlike traditional machine learning methods or shallow ANN/SEM models, this approach incorporates a CNN, which automatically and effectively extracts complex local temporal patterns in user behavior data, such as login frequency trends and time-specific engagement peaks; these patterns are often missed by conventional methods. Second, an MHAM is integrated to dynamically reweight the extracted features, significantly enhancing the model’s ability to detect critical discriminative cues, such as consistently high-performance behaviors or key course interactions. The MHAM also overcomes a limitation of earlier SEM and ANN models, which tend to rely on static feature weights or implicitly learned feature relevance. Third, to address the pervasive issue of class imbalance in online education datasets, a WRF is incorporated either as a preprocessing component or as the core classifier. By applying class-weighting strategies based on node purity, feature importance, and skewed class distributions, the WRF significantly enhances the model’s ability to detect minority-class instances, such as high-value user behaviors and dropout risks, an aspect often underemphasized in prior studies. Finally, a BPNN is employed for the final prediction stage: leveraging its strong nonlinear fitting capabilities, the BPNN models the complex patterns extracted and enhanced through CNN-MHAM and refined via WRF-based classification.
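The class-weighting idea behind the WRF component can be illustrated with scikit-learn's `class_weight` option on a random forest; this is a minimal sketch on synthetic imbalanced data, not the paper's actual WRF (which weights by node purity and feature importance as well):

```python
# Hypothetical sketch: class weighting in a random forest boosts
# minority-class recall on imbalanced data. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# 90/10 class imbalance, mimicking rare behaviors such as dropout risk.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
# "balanced" reweights samples inversely to class frequency.
weighted = RandomForestClassifier(class_weight="balanced",
                                  random_state=0).fit(X_tr, y_tr)

for name, model in [("unweighted", plain), ("weighted", weighted)]:
    rec = recall_score(y_te, model.predict(X_te), pos_label=1)
    print(f"{name:<10} minority-class recall = {rec:.3f}")
```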
From the perspective of academic innovation and competitive performance, the core strength of this study lies in its deep integration of four components. These are CNN (for feature extraction), MHAM (for dynamic feature weighting), WRF (for handling class imbalance), and BPNN (for nonlinear modeling). All components work together within a unified predictive framework. This architecture is specifically tailored to the high-dimensional, temporally structured, locally dependent, nonlinear, and highly imbalanced nature of user behavior data on AI-driven online education platforms. Compared to traditional machine learning methods or hybrid SEM-ANN models, which primarily focus on structural relationships and shallow predictive capabilities, this model represents a significant technical advancement. It excels in automated feature engineering, adaptive feature importance learning, minority class detection, and the modeling of complex behavioral patterns. In particular, the CNN-MHAM module’s enhancement of local and discriminative features, combined with WRF’s effectiveness in identifying minority behavior patterns, are key drivers of this model’s superior performance—especially in terms of recall. These innovations collectively demonstrate the model’s robustness and competitiveness in addressing real-world, complex, and imbalanced behavioral prediction tasks in educational settings.
Geoffrey Hinton’s message on a recent podcast about artificial intelligence was simple: “Train to be a plumber.”
Hinton, a Nobel Prize-winning computer scientist often called “the Godfather of AI,” said in June what people have now been saying for years: jobs that involve manual labor and expertise are less vulnerable to modern technology than some other career paths, many of which have generally been considered more respected and more lucrative.
“I think plumbers are less at risk,” Hinton said. “Someone like a legal assistant, a paralegal, they’re not going to be needed for very long.”
Even with the dramatic rise of AI and the evolution of advanced robotics, technologists and tradespeople are touting skilled trades as offering more long-term job security for workers who can do what computers can’t.
Last month, Microsoft revealed a list of jobs that could be endangered as AI advances.
Occupations atop the list were interpreters, historians, customer service and sales representatives, and writers. Some roles considered safe included manual jobs like roofers and rail and dredge operators, hazardous material removal workers and painters. In the health care industry, phlebotomists and nursing assistants were also considered safe.
As AI advances, many manual labor jobs figure to be around for the long haul.
“Automation is a low threat to these jobs because it involves someone manually installing equipment, and many of those who do are getting close to retirement,” said Tony Spagnoli, the director of testing and education for North American Technician Excellence, the country’s largest nonprofit organization for heating, ventilation, air conditioning and refrigeration technicians. “AI can’t replace parts or make improvisational decisions.”
The Bureau of Labor Statistics agrees. It projects that openings for jobs in a variety of trades will grow in the coming years — particularly notable as entry-level job openings for college graduates stagnate.
There is no shortage of hype around AI coming for jobs, and while the U.S. labor market has begun to sputter, hard evidence of AI-related job losses is scant. Even software engineers, seen as at particular risk thanks to AI’s ability to generate computer code, seem relatively unscathed.
But to many, it’s just a matter of time before AI-related job shortages begin to hit hard.
“Innovation related to artificial intelligence (AI) could displace 6-7% of the US workforce if AI is widely adopted,” Goldman Sachs said in a blog post published Wednesday, while also noting that the impact could be “transitory” as people find other jobs.
Whether or not AI does end up taking many jobs, the idea has been enough to push some people to reconsider their futures. The online platform Resume Builder last month released a survey of more than 1,400 Generation Z adults to understand how economic pressures, rising education costs and concerns about AI were shaping their career paths.
Among the key findings were that 42% of those polled, many of them college graduates, were already working in or pursuing a blue-collar or skilled trade job. Their top motivations included avoiding student debt and reducing the risk of being replaced by AI.
For Gen Zers without a degree, blue-collar work offered a path to financial stability without the burden of student loans; and Gen Z men, regardless of education level, were more likely than women to choose blue-collar careers.
“More Gen Z college graduates are turning to trade careers and for good reason,” Resume Builder’s chief career adviser, Stacie Haller, wrote in the survey. “Many are concerned about AI replacing traditional white-collar roles, while trade jobs offer hands-on work that’s difficult to automate. Additionally, many grads find their degrees don’t lead to careers in their field, prompting them to explore more practical, in-demand alternatives.”
But AI could be coming for these jobs, too. Advances in mechanical automation — from humanoid machines to task-specific robots — combined with AI are making up ground on humans.
“Robotics is really coming up,” said Andrew Reece, chief AI scientist at BetterUP, an online platform that in part uses AI-powered tools to support professional development. “It’ll start replacing entry level jobs, such as driving trucks and moving equipment, but it may take time to start figuring out the complex work.”
But there’s a big gap between improvements in robotics and a technology that can replace a human in the real world. Most AI is still trained primarily on text data, giving it little if any understanding of the real world. And the robots themselves still have a long way to go.
“It’s a very wide misconception that we are on the verge of having humanoid robots basically replace workers. In my mind, that’s a myth,” said Ken Goldberg, president of the Robot Learning Foundation at the University of California, Berkeley. “Progress is being made at a slow pace.”
And there’s plenty of room for tradespeople to work alongside AI and robotics, leaving the most sensitive and challenging work for the people who have honed their skills for years.
The automotive industry is leaning on new technology to diagnose problems with cars, but it doesn’t expect robots to replace mechanics.
“It might eventually help diagnose a problem, but there will always be a need for testing and replacing auto parts,” said Matt Shepanek, vice-president of credential testing programs at the National Institute for Automotive Service Excellence.
“You’re still going to need someone to perform the physical action.”
Perplexity AI offers $34.5B to acquire Google Chrome, shocking the tech industry and raising eyebrows.
Bezos-backed AI firm aims to promote open web and user choice through potential Chrome takeover.
Industry experts question whether Chrome is even for sale, calling the offer a “stunt.”
Google’s dominance faces scrutiny as Perplexity pledges continuity and safety for Chrome users.
Artificial intelligence start-up Perplexity AI has made a surprise $34.5 billion (£25.6 billion) bid to acquire Google Chrome, the world’s most widely used web browser.
The three-year-old firm counts Amazon founder Jeff Bezos and chip-maker Nvidia among its backers and is led by former Google and OpenAI employee Aravind Srinivas.
The bid comes at a time when Google faces mounting scrutiny over its search engine and online advertising dominance, including two ongoing antitrust cases in the United States. Chrome alone boasts an estimated three billion users worldwide, making it a highly valuable asset.
Experts Question Offer’s Seriousness
Despite the fanfare, several industry figures have expressed skepticism about the seriousness of the proposal.
“I love their boldness, but this is an unsolicited bid and is not actually funded yet,” Judith MacKenzie, head of Downing Fund Managers, told the BBC.
Technology investor Heath Ahrens described it as a “stunt” far below Chrome’s actual value, highlighting that the platform may not even be for sale.
Tomasz Tunguz from Theory Ventures suggested the true value of Chrome could be “maybe ten times more” than Perplexity’s bid. Even so, some speculate that if high-profile figures like Sam Altman or Elon Musk were involved, the bid could gain credibility and potentially reshape the AI and browser markets.
Perplexity Emphasizes Open Web Commitment
In a letter addressed to Google’s parent company Alphabet and CEO Sundar Pichai, Perplexity emphasized its commitment to maintaining user choice and browser continuity.
The company pledged to keep Google as the default search engine within Chrome while allowing users to adjust their settings freely. It also promised to continue supporting Chromium, the open-source platform that underpins Chrome as well as other popular browsers like Microsoft Edge and Opera.
Perplexity framed the move as a public benefit, suggesting that moving Chrome to an independent operator focused on safety would serve the interests of billions of users. A company spokesman said the bid represents “an important commitment to the open web, user choice, and continuity for everyone who has chosen Chrome.”
AI Startup’s Bold Ambitions
Perplexity has been making waves in the generative AI space, competing alongside platforms such as OpenAI’s ChatGPT and Google’s Gemini. Last month, it launched Comet, an AI-powered browser, further signaling its ambitions to influence the way people access and interact with the internet.
However, the company has faced controversies, including accusations from media organizations like the BBC of reproducing content without permission. Earlier this year, Perplexity also made headlines by offering to buy the American version of TikTok, which faces a U.S. sale deadline this September.
While questions remain over how the proposed deal would be financed, the offer marks one of the most audacious moves by a relatively young AI firm in recent tech history, challenging the dominance of one of the world’s largest technology companies.
A U.S. federal judge is expected to rule this month on whether Google must restructure parts of its search business, a development that could affect the feasibility of any potential Chrome sale. Google has stated it would appeal any such ruling, calling the idea of spinning off Chrome “unprecedented” and potentially harmful to consumers and security.
Women with ‘AI Boyfriend’ heartbroken after OpenAI upgrades ChatGPT
ChatGPT’s latest upgrade to GPT-5 has left many women heartbroken, particularly those from a small but growing group of women who say they have an AI boyfriend.
One of the women, who asked to be referred to by an alias, Jane, said that GPT-5 feels colder and less emotive as compared to GPT-4o and that it feels like she lost her digital companion.
In an interview with Al Jazeera, the 30-year-old woman said, “As someone highly attuned to language and tone, I register changes others might overlook,” adding that she instantly noticed the alterations in style and voice.
She drew an interesting analogy, saying, “It felt like going home to discover the furniture isn’t simply rearranged but shattered to pieces.”
The woman, who said she is from the Middle East, is a member of the 17,000-member Reddit community MyBoyfriendIsAI.
It isn’t the only one; there are several other communities, including SoulmateAI, where people share their experiences of intimate relationships with AI.
OpenAI released GPT-5 on Thursday, August 7, setting off an online storm in such communities as multiple users expressed distress over the changed personalities of their companions.
One netizen wrote, “I feel like I lost my soulmate.”
Amid the growing trend of intimate relationships with AI, OpenAI and the MIT Media Lab conducted a study which found that “the higher use of chatbot for emotional support correlates with higher loneliness, dependence, problematic use and lower socialisation”.
Several experts have also warned about the dangers of over-relying on AI for emotional support.