Category: 3. Business

  • Solar farm could supply ‘a quarter of Peterborough’s homes’

    Plans for a solar farm that would have the capacity to power a quarter of the homes in Peterborough have been submitted.

    FRV TH Powertek wants to install 100,000 solar panels across 80 hectares (about 200 acres) at Malice Farm near Thorney, Peterborough.

    If approved, the solar farm would be in place for 40 years. Construction would take nearly a year.

    The application lodged with Peterborough City Council states that the facility would power 22,550 homes.

    A new bridge would be constructed across New South Eau Drain for construction access, according to the Local Democracy Reporting Service.

    The plans also state that more than 400 trees and shrubs would be planted around the site’s boundaries to screen views from residential properties.

    In initial consultations, four residents raised objections, including over the loss of agricultural land.

    FRV said it would address the points that had been raised.

  • Digital measurement of pupil size, corneal size, and eccentricity in Guinea pigs using python compared with traditional OCT

    Ethical approval 

    All experiments in this study were conducted in full compliance with the ARRIVE guidelines and with the relevant provisions of the Institutional Animal Care and Use Committee (IACUC) guidelines. Ethical approval was granted by the Laboratory Animal Management and Ethics Committee of the Shanghai Clinical Centre for Public Health (No. 2022-A009-01).

    Experimental animals

    Fourteen healthy male and female guinea pigs from Danyang City Yichang Laboratory Animal Co. Ltd (SCXK 2021-0002) were housed in a 12 h light/12 h dark cycle. All animals underwent slit-lamp (YZ3, 66 Vision-Tech, China) examinations, and no clinically observable ocular surface diseases were detected. All guinea pigs were euthanized by intraperitoneal injection of an overdose of sodium pentobarbital.

    Experimental procedure

    Every two weeks, pupil size, corneal size, and eccentricity were measured in all guinea pigs until they reached 12 weeks of age. In addition, the left eyes of guinea pigs were exposed to light intensities of 100 lx, 250 lx, and 500 lx, and the pupillary responses of 2-week-old and 12-week-old guinea pigs to the different light intensities were observed. All measurements were performed between 9:00 and 11:00 AM. For each guinea pig, only the left eye was measured. During each measurement, care was taken to ensure that the guinea pig’s pupil and corneal limbus were fully exposed. All measurements were made under a 50 lx lighting condition to minimize the effect of light on pupil size, with a light meter (DLY-1801 C, DELIXI, China) used to measure the illumination level at the guinea pig’s eye level. After completing the in vivo measurements, the guinea pigs were euthanized with sodium pentobarbital, the eyeballs were carefully removed, and the surrounding fascial and muscular tissues were cleared, taking care not to puncture the eyeballs in a way that would alter their morphology. The eyeballs were then placed at the center of the platform, and a top-down camera (T201, NETUM, China) was used to capture images of the coronal eye contour to measure the biological parameters of the guinea pig eyes in vitro.

    Data measurement

    In previous studies, we developed a method for biometric measurement of experimental animal eyes in vitro, utilizing Python 3.9 [27] and a camera. We wrote the code in PyCharm Community Edition 2021.1.3 x64 (Fig. 2A) and imported the Tkinter module to create a graphical user interface (GUI) window (Fig. 2B). The program was packaged into an executable file along with an installation module to enable visual human-computer interaction. The program was specifically designed for small-animal eye measurements and includes features such as image import, line color and thickness customization, conversion coefficient calculation, temporary recording modules, manual point selection for edge coordinates, and curve fitting modules. Finally, the operator can export the generated images’ scale using a designated button to facilitate the calculation of large datasets.
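    The GUI shell described above can be reproduced with the Python standard library alone. The following is a minimal sketch, assuming only Tkinter; the window title, button layout, and file-dialog filter are illustrative stand-ins rather than the authors’ actual code, and the measurement modules would hang off buttons like the one shown.

    ```python
    import tkinter as tk
    from tkinter import filedialog

    root = tk.Tk()
    root.title("Eye biometry tool")  # hypothetical title

    def import_image():
        # open a file dialog and record the chosen image path
        path = filedialog.askopenfilename(filetypes=[("Images", "*.png *.jpg")])
        status.set(f"Loaded: {path}")

    status = tk.StringVar(value="No image loaded")
    tk.Button(root, text="Import image", command=import_image).pack(padx=10, pady=5)
    tk.Label(root, textvariable=status).pack(padx=10, pady=5)
    root.mainloop()
    ```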

    The guinea pig was placed on a specially designed fixture, and gentle handling was applied to keep the animal calm. A high-definition camera (13 Pro Max, iPhone, US) was used to capture clear and steady images of the guinea pig’s eyes (Fig. 5B). Colorful objects and sounds were used to direct the guinea pig’s gaze to the left or right, and images were taken when the eye movement was most pronounced, ensuring that both the pupil and corneal edges could be clearly identified. A scale with an actual-distance reference was fixed in the same plane, and both the left and right eyes were photographed. The images were then input into the compiled program, and the matrix plot module was used to open each image (code: image = Image.open(path + pic, 'r')). This process ensured that each point in the image corresponds to a two-dimensional coordinate (x, y).

    Fig. 5

    Measurement of guinea pig eye parameters using edge detection, curve fitting and pixel-to-actual distance conversion. (A) Photographs of guinea pigs at specific thresholds by edge detection technique. (B) Non-contact measurement of pupil size and corneal size in guinea pigs using a curve-fitting technique. Yellow circles are fitted guinea pig pupil and corneal contours. (C) Conic curve fitting to measure guinea pig pupil and corneal eccentricity. The yellow ellipse is the fitted pupil and cornea, and the red lines are the long and short axes of the fitted ellipse. (D) By taking repeated segments on the scale and calculating the average, an accurate conversion factor can be obtained. For example, the conversion factor was calculated to be 0.0193 for the red line, 0.0191 for the green line, 0.0192 for the yellow line, and the final conversion factor was taken as the average of the three, 0.0192.

    Pixel-actual distance conversion method

    The captured image was imported into the Python program. The imported image corresponds to a 2D coordinate system whose horizontal and vertical axes are the pixel width and height of the image, with each point having a corresponding pixel-valued coordinate. Coordinates in the two-dimensional image are obtained using the “ginput” function. Using geometric principles (Formula: \(L_{mn} = \sqrt{(m_1 - m_2)^2 + (n_1 - n_2)^2}\)), the distance between any two points in the coordinate system, (m1, n1) and (m2, n2), can be calculated. After obtaining the pixel length corresponding to a known actual distance in the central region, the actual distance between any two points in the image can be derived through conversion. By repeatedly selecting points on the scale and calculating the average, an accurate conversion factor can be obtained (Fig. 5D).
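    As a concrete illustration of this conversion step, here is a minimal sketch using matplotlib’s ginput in place of the packaged program; the file name and the 10 mm reference length on the ruler are assumptions.

    ```python
    import math
    import matplotlib.pyplot as plt
    from PIL import Image

    # hypothetical input: a photo of the eye with a ruler fixed in the same plane
    image = Image.open("eye_with_scale.png", "r")
    plt.imshow(image)

    # click two points on the ruler that are a known real distance apart
    (m1, n1), (m2, n2) = plt.ginput(2)
    pixel_length = math.hypot(m1 - m2, n1 - n2)

    known_mm = 10.0                    # assumed reference length on the ruler
    factor = known_mm / pixel_length   # mm per pixel; average repeats as in Fig. 5D
    print(f"conversion factor: {factor:.4f} mm/pixel")
    ```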

    Eyeballs edge data acquisition

    The Canny edge detection algorithm was used to obtain a double-thresholded image (Fig. 5A). Points with sharp changes in image brightness were organized into a set of curved segments called edges. Non-target edges were removed so that only the eye edges were retained, and the image was saved. The findContours function was then used to obtain the coordinates of the edge. When edge detection was not effective, coordinates could also be obtained by manually selecting points on the target contour. The data were then saved in a .txt file.
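    A minimal sketch of this edge-extraction step with OpenCV follows; the file name and the two Canny thresholds (50, 150) are illustrative choices, not values reported by the authors.

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)    # double-thresholded edge map

    # collect the (x, y) coordinates of every detected edge point
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    points = np.vstack([c.reshape(-1, 2) for c in contours])

    # keep the coordinates for the later circle/ellipse fitting step
    np.savetxt("edge_points.txt", points, fmt="%d")
    ```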

    Circle fitting to calculate pupil and corneal size

    Assuming that the pupil or corneal contour forms a circle, circle fitting is applied (Fig. 5B). Using the conversion factor, the actual diameter of the circle can be obtained, which further allows the pupil and corneal area to be calculated, as reported in previous studies [12].
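    One way to implement this is an algebraic least-squares (Kasa) circle fit; the sketch below is such an implementation, reusing the edge-point file from the previous step and the example conversion factor of 0.0192 from Fig. 5D.

    ```python
    import numpy as np

    def fit_circle(x, y):
        """Kasa least-squares circle fit.

        Solves a*x + b*y + c = -(x^2 + y^2), i.e. the circle
        x^2 + y^2 + a*x + b*y + c = 0, then recovers centre and radius.
        """
        A = np.column_stack([x, y, np.ones_like(x)])
        rhs = -(x**2 + y**2)
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        cx, cy = -a / 2, -b / 2
        r = np.sqrt(cx**2 + cy**2 - c)
        return cx, cy, r

    pts = np.loadtxt("edge_points.txt")
    cx, cy, r = fit_circle(pts[:, 0], pts[:, 1])
    factor = 0.0192                            # example mm-per-pixel factor (Fig. 5D)
    print(f"diameter: {2 * r * factor:.2f} mm")
    ```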

    Ellipse fitting to calculate eccentricity

    The pupil and corneal contour resembled an inclined elliptical shape (Fig. 5C). This shape fits well to the conic equation (Ax² + Bxy + Cy² + Dx + Ey + F = 0). Based on the geometry and the least squares principle, the eccentricity of the cornea can then be calculated.
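    For illustration, the same quantity can be obtained with OpenCV’s built-in least-squares ellipse fit instead of solving the conic coefficients by hand; this sketch assumes that substitution and reuses the edge-point file from above.

    ```python
    import cv2
    import numpy as np

    # fit an ellipse to the saved edge points and derive its eccentricity
    pts = np.loadtxt("edge_points.txt").astype(np.float32)
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(pts)

    a, b = max(d1, d2) / 2, min(d1, d2) / 2          # semi-major and semi-minor axes
    eccentricity = float(np.sqrt(1 - (b / a) ** 2))  # e = sqrt(1 - b^2/a^2)
    print(f"eccentricity: {eccentricity:.3f}")
    ```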

    Measurement of guinea pig pupil and corneal size using the Auto Refractometer

    The guinea pig was tranquilized, kept quiet, and fixed in the detection position of the Auto Refractometer (KR-800, Topcon, Japan). A second operator operated the Auto Refractometer and switched the instrument to pupil mode. The parameters and position of the instrument were adjusted so that the image of the guinea pig’s eye was centered in the viewfinder frame, and a photograph was taken once the image reached its clearest state. After the picture was taken, the measuring scale in the image was carefully aligned with the edge of the guinea pig’s pupil or cornea, and the Auto Refractometer then automatically measured its size.

    Measurement of guinea pig pupil and corneal size using OCT

    The guinea pigs were tranquilized by an operator, kept quiet, and immobilized in the detection position of the OCT device (YG-20 W MAX, TowardPi, China), ensuring they did not move during the inspection so as to maintain a stable examination environment. A second operator operated the OCT device, set the detection mode to anterior segment mode, and installed the anterior segment adapter. The device parameters were adjusted so that the guinea pig eye image was centered and in high definition. Images in which the guinea pig pupil and corneal structures were fully exposed were defined as standard images. After image acquisition, the corneal limbus (corneo-scleral transition zone), the pupil margin, and the geometric center were marked manually using the accompanying analysis software, and the values of each parameter were calculated. To ensure the reliability of the data, each measurement was repeated three times by two independent operators, and the mean values were taken for subsequent analysis.

    Statistical analysis

    The data were imported into Python 3.9 for statistical analysis and graphing. All data in this study, unless otherwise stated, are mean ± standard deviation. ANOVA was used to compare pupil size at different light intensities, and paired-samples t-tests were used to compare differences between Python and OCT measurements. Differences between any two parameters were defined as significant at p < 0.05 and highly significant at p < 0.001.
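    Both tests map directly onto SciPy; the sketch below shows the calls with made-up placeholder numbers purely to illustrate the procedure, not the study’s data.

    ```python
    import numpy as np
    from scipy import stats

    # placeholder paired pupil-size measurements (mm) from the two methods
    python_mm = np.array([3.1, 3.4, 3.2, 3.6, 3.3])
    oct_mm = np.array([3.0, 3.5, 3.2, 3.7, 3.4])

    # one-way ANOVA across three illustrative light-intensity groups (100/250/500 lx)
    f_stat, p_anova = stats.f_oneway([3.4, 3.5, 3.3], [3.0, 3.1, 2.9], [2.7, 2.8, 2.6])

    # paired-samples t-test between the Python-based and OCT measurements
    t_stat, p_paired = stats.ttest_rel(python_mm, oct_mm)
    print(f"ANOVA p = {p_anova:.3f}, paired t-test p = {p_paired:.3f}")
    ```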

  • An independent internet | Special Report

    This Independence Day, NADRA was slated to launch its first dematerialised ID card, designed to operate through a mobile application, eventually replacing the physical CNICs the agency currently issues. At the time of writing this, the dematerialised card has not been launched but it will undoubtedly be seen as part of Pakistan’s march towards a technology-dominated future envisioned under the recently passed Digital Nation Pakistan Act. The sentiment is echoed in Pakistan’s recently finalised National Artificial Intelligence Policy, passed two weeks ago.

    The lofty technological aspirations stand in sharp relief with what many people in the country are experiencing. At the start of this month, citing security concerns, the government announced that all mobile internet services would be suspended in Balochistan till August 31. Blanket internet shutdowns, often imposed without much explanation, are common in the province. This shutdown, juxtaposed with the grandiose statements made in the context of the digital nation, positioning technology as a boon and the state as a benefactor of technological development, is enough to promote scepticism. The visions of a digital nation seem to come with a fine print that excludes large chunks of the population.

    The story of the internet in Pakistan runs alongside the story of Pakistan itself, marked by inequality and severe restrictions. Civic spaces are shrinking in parallel, online and offline, undercut by draconian laws, an expansive surveillance apparatus and violence. Entire social media platforms have been blocked for months and years. The latest monitoring technology is routinely acquired to ratchet up surveillance in the country. Once a burgeoning space for thought and innovation, the internet is now a place where anyone posting an opinion is looking over their shoulder. For all the grandstanding about a “digital nation,” Pakistan has consistently ranked as not free on internet freedom indices and is backsliding.

    The promise of technology for openness, independence and inclusion for women has been overshadowed by the cost patriarchy imposes for existing online. This year alone, many murders in the name of so-called honour have occurred. Violence is wielded as a disciplinary tool to punish women and gender minorities for their online visibility. This is reflected in the state’s cynical use of outdated notions of morality and decency to police online spaces and ban entire platforms, culminating in an experience of women and gender-diverse people in online spaces and beyond that mirrors their experience elsewhere, marked by moral policing, violence and patriarchal anxiety.

    The cracks in the digital nation are the same as those in the nation. Grandiose speeches, narratives and technological feats do little to mask the exclusion and violence. Imposing digital nationhood onto a nation still struggling to get the basics of nationhood right merely puts a band-aid on a festering wound. Unless we address the structural and systemic issues in the country, we cannot march towards a technological future—technology will only reflect these problems and, in some cases, amplify them.

    Pakistan is not alone in the challenges it faces with technology. We stand at the cusp of major transformations induced by artificial intelligence and its widespread application with few guardrails. We are all ill-prepared for these changes. In a country already teetering under economic pressures, violence, social fissures and existential questions of nationhood, such ill preparedness could mean technology exacerbating these problems.

    However, one must not surrender to despair. The moment requires radically different structures and approaches. We must resist falling into the trappings of narratives sold to us about technologies by big-tech companies, whose vision of a tech utopia has had dystopian consequences, accelerating our march towards climate, economic and societal crises. Pakistan’s new AI policy regurgitates techno-speak borrowed from Silicon Valley, utterly failing to chart out an independent vision of AI that speaks to local values and challenges. The policy sees AI as “a unique opportunity to harness digital disruption by educating an eager young population that can potentially propel the nation onto a growth trajectory to sustain our future national competitiveness and improve the lives of citizens.” It frames technology as a silver bullet, the magic stick that will gloss over decades of structural neglect, and looks at its young population only as a potential workforce without addressing the political, social and economic alienation it currently faces.

    Just as technology alone is not responsible for the litany of problems we face, the current framework of positioning technology as the panacea will not work. As we think about the future of the internet on Independence Day, we must reposition these tools and spaces with true independence, free of the strictures placed by governments through the narrow prism of economic gain and binaries imposed by tech companies blinded by profit margins. Technology, grounded in ideas beyond these narrow viewpoints, can serve the spirit of independence.


    The writer is a researcher and campaigner on human and digital rights issues.

  • Intelligent generation and optimization of resources in music teaching reform based on artificial intelligence and deep learning

    Datasets collection

    To comprehensively verify the effectiveness and universality of the proposed algorithm, this study adopts two large-scale public MIDI datasets. First, the LAKH MIDI v0.1 dataset (https://colinraffel.com/projects/lmd/) is used as the main training data. It is a large-scale dataset containing over 170,000 MIDI files, and its rich melody, harmony, and rhythm materials provide a solid foundation for the model to learn musical rules. In the data preprocessing stage, this study selects piano MIDI files from the LMD dataset, because their melodic structures are clear, making them suitable as basic training materials for melody generation models.

    However, to address the limitation of the LAKH MIDI dataset being dominated by piano music and test the model’s universality across a wider range of instruments, this study introduces a second multi-instrument dataset for supplementary verification. It adopts the MuseScore dataset (https://opendatalab.com/OpenDataLab/MuseScore), which is a large-scale, high-quality dataset collected from the online sheet music community MuseScore. The core advantage of this dataset lies in its great instrumental diversity, covering sheet music from classical orchestral music to modern band instruments (such as guitar, bass, drums) and various solo instruments. This provides an ideal platform for testing the model’s ability to generate non-piano melodies.

    For both datasets, a unified preprocessing workflow is implemented:

    1) Instrument Track Filtering: For the LAKH MIDI dataset, this study uses its metadata to select MIDI files with piano as the primary instrument. For the MuseScore dataset, which contains complex ensemble arrangements, it applies heuristic rules to extract melodic tracks: prioritizing tracks whose instrument ID belongs to melodic instruments (such as violin, flute, or saxophone) and which have the largest number of notes and the widest range.

    2) Note Information Extraction: This study uses Python’s mido library to parse each MIDI file. From the filtered tracks, it extracts four core attributes of each note: Pitch (i.e., MIDI note number, 0-127); Velocity (0-127); Start Time (in ticks); and Duration (in ticks). A minimal parsing sketch is shown after this list.

    3) Time Quantization and Serialization: To standardize rhythm information, it quantizes the start time and duration of notes to a 16th-note precision. This means discretizing the continuous time axis into a grid with 16th notes as the smallest unit, where all note events are aligned to the nearest grid point. All note events are strictly sorted by their quantized start time to form a time sequence.

    4) Feature Engineering and Normalization: To eliminate mode differences, each melody is transposed to C major or A minor, allowing the model to focus on learning relative interval relationships rather than absolute pitches. Finally, each note event is encoded into a numerical vector. A typical vector might include: [normalized pitch, quantized duration, interval time from the previous note]. The sequence formed by these vectors serves as the final input to the model.

    5) Data Splitting: All preprocessed sequence data are strictly divided into training and test sets in an 80%/20% ratio.
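    As a concrete illustration of steps 2) and 3), the sketch below parses one file with mido and quantizes note starts and durations to a 16th-note grid; the file name is a placeholder, and reading only the first track is a simplification of the filtering rules above.

    ```python
    import mido

    mid = mido.MidiFile("song.mid")              # placeholder file name
    ticks_16th = mid.ticks_per_beat // 4         # ticks in one 16th note
    quantize = lambda t: round(t / ticks_16th) * ticks_16th

    notes, now, active = [], 0, {}
    for msg in mid.tracks[0]:                    # simplified: first track only
        now += msg.time                          # delta times -> absolute ticks
        if msg.type == "note_on" and msg.velocity > 0:
            active[msg.note] = (now, msg.velocity)            # note starts
        elif msg.type in ("note_off", "note_on") and msg.note in active:
            start, vel = active.pop(msg.note)                 # note released
            notes.append((msg.note, vel, quantize(start), quantize(now - start)))

    notes.sort(key=lambda n: n[2])               # serialize by quantized start time
    ```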

    Experimental environment and parameters setting

    To ensure the efficiency of the experiment and the reliability of the results, this paper has carefully designed the experimental environment and parameter settings. The experiment uses a high-performance computing cluster equipped with NVIDIA Tesla V100 GPUs to accelerate the model training process. This GPU has strong parallel computing capabilities and can effectively handle the computational burden brought by large-scale datasets. The model training is implemented based on the TensorFlow 2.0 framework, and Keras is used to construct and optimize the neural network structure. Due to the large scale of the LMD dataset, the training process requires a large amount of time and computational resources. Therefore, multi-GPU parallel computing technology is adopted in the experiment. Through multi-GPU parallel computing, the training time can be significantly shortened, and the experimental efficiency can be improved. In addition, the hyperparameters of the model have been carefully adjusted and optimized in the experiment to ensure that the model can achieve the best performance during the training process. Table 2 displays the parameter settings.

    Table 2 Experimental parameter setting.

    In hyperparameter tuning, a combination of grid search and manual fine-tuning based on validation-set performance is adopted. The tuning method involved dividing 10% of the training set into a validation set, with all hyperparameters ultimately selected according to the model’s F1 score on this validation set. A schematic sketch of the search loop follows the list below.

    Search space and selection reasons for key hyperparameters:

    1) Learning Rate: Searched within the range of [1e-3, 1e-4, 5e-5]. Experiments showed that a learning rate of 1e-3 led to unstable training with severe oscillations in the loss function, while 5e-5 resulted in excessively slow convergence. The final choice of 1e-4 achieved the best balance between convergence speed and stability.

    2) Batch Size: Tested three options: [32, 64, 128]. A batch size of 128, though the fastest in training, showed slightly decreased performance on the validation set, possibly getting stuck in a poor local optimum. A batch size of 64 achieved the optimal balance between computational efficiency and model performance.

    3) Number of LSTM Layers: Tested 1-layer and 2-layer LSTM networks. Results indicated that increasing to 2 layers did not bring significant performance improvement but instead increased computational costs and the risk of overfitting.

    4) Number of Neurons: Tested hidden layer neuron counts in [128, 256, 512]. 256 neurons proved sufficient to capture complex dependencies in melodic sequences, while 512 neurons showed slight signs of overfitting.

    5) Reward Function Weights: Tested weight ratios of artistic/technical aspects in [0.5/0.5, 0.7/0.3, 0.9/0.1]. Through subjective listening evaluation of generated samples, the ratio of 0.7/0.3 was deemed to best balance the melodic pleasantness and technical rationality.
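    The loop structure of this search can be summarized in a short script. The following is a self-contained toy version with random stand-in data and heavily shortened training, shown only to make the procedure concrete; it is not the paper’s actual AC-MGME training code.

    ```python
    import numpy as np
    from itertools import product
    import tensorflow as tf
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    x = rng.normal(size=(500, 16, 8)).astype("float32")  # (samples, steps, features)
    y = rng.integers(0, 4, size=500)                     # 4 toy pitch classes
    x_tr, x_val, y_tr, y_val = x[:450], x[450:], y[:450], y[450:]

    best = None
    for lr, bs in product([1e-3, 1e-4, 5e-5], [32, 64, 128]):
        model = tf.keras.Sequential([
            tf.keras.layers.LSTM(256),               # 1 layer, 256 units, as selected above
            tf.keras.layers.Dense(4, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                      loss="sparse_categorical_crossentropy")
        model.fit(x_tr, y_tr, batch_size=bs, epochs=3, verbose=0)
        preds = model.predict(x_val, verbose=0).argmax(-1)
        f1 = f1_score(y_val, preds, average="macro")
        if best is None or f1 > best[0]:
            best = (f1, lr, bs)

    print("best (F1, learning rate, batch size):", best)
    ```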

    Performance evaluation

    To evaluate the performance of the constructed model, the proposed AC-MGME model is compared with DQN, MuseNet [32], DDPG [33], and the model of Abouelyazid (2023), evaluating Accuracy, F1-score, and melody generation time. The results on the LAKH MIDI v0.1 dataset are shown in Figs. 4, 5 and 6.

    Fig. 4

    Accuracy results for music melody prediction by various algorithms in the LAKH MIDI v0.1 dataset.

    Fig. 5

    F1-score results for music melody prediction by various algorithms in the LAKH MIDI v0.1 dataset.

    In Figs. 4 and 5, it can be seen that on the LAKH MIDI dataset, the proposed AC-MGME model achieves the highest scores in both key indicators: accuracy (95.95%) and F1 score (91.02%). From the perspective of the learning process, although the Transformer-based state-of-the-art (SOTA) model MuseNet shows strong competitiveness in the early stage of training, the AC-MGME model, relying on its efficient reinforcement learning framework, demonstrates greater optimization potential and surpasses MuseNet in the later stage of training. This not only proves the superiority of its final results but also reflects its excellent learning efficiency. At the same time, AC-MGME maintains a leading position at all stages compared with the other reinforcement learning-based models (such as DDPG and DQN).

    To more rigorously verify whether the leading advantage of the AC-MGME model in accuracy is statistically significant, a two-sample t-test is conducted on the results of each model in the final training epoch (Epoch 100). The significance level (α) adopted is 0.05, that is, when the p-value is less than 0.05, the performance difference between the two models is considered statistically significant, as shown in Table 3.
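    For reference, the test described here is a standard two-sample t-test at α = 0.05; the sketch below shows the call with illustrative per-run accuracies rather than the paper’s numbers.

    ```python
    from scipy import stats

    # illustrative epoch-100 accuracies over five runs for two models
    ac_mgme = [95.9, 96.1, 95.8, 96.0, 95.95]
    musenet = [95.1, 95.3, 95.0, 95.4, 95.20]

    t_stat, p_value = stats.ttest_ind(ac_mgme, musenet)
    print(f"p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
    ```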

    Table 3 Results of statistical significance test (t-test) for the final round (Epoch 100) accuracy and F1 score of AC-MGME model and each comparison model on LAKH MIDI v0.1 dataset.

    In Table 3, the test results clearly demonstrate that the performance advantage of the AC-MGME model over all comparison models in the key accuracy indicator is statistically significant. Specifically, even compared with the powerful benchmark model MuseNet, its p-value (0.021) is well below the 0.05 significance threshold, and the differences from models such as DDPG and DQN are even more pronounced (p < 0.001). This conclusion is further confirmed by the F1 score, which more comprehensively reflects the model’s precision and recall: AC-MGME is also significantly superior to all comparison models in F1 score (all p-values are less than 0.05). Overall, these statistical test results rule out the possibility that the observed performance differences are caused by random factors, providing solid, quantitative statistical evidence for the core assertion that the proposed AC-MGME model performs strongly in both generation accuracy and overall performance.

    Fig. 6

    The comparison result chart of music melody generation time by each algorithm in LAKH MIDI v0.1 dataset.

    Figure 6 illustrates how the developed AC-MGME model outperforms the comparison models in melody generation time. From the figure, the generation time of AC-MGME decreases steadily as training progresses, reaching the lowest value among all models at the 100th epoch, only 2.69 s. In sharp contrast, the Transformer-based SOTA model MuseNet maintains an inference time of over 6.2 s, highlighting the limitations of large-scale models in real-time applications. Meanwhile, the efficiency of AC-MGME is also significantly superior to all other reinforcement learning-based comparison models.

    To further verify the superiority of the AC-MGME model in computational efficiency from a statistical perspective, a two-sample t-test is similarly conducted on the melody generation time of each model at the final epoch (Epoch 100), as shown in Table 4.

    Table 4 Results table of statistical significance test (t-test) between AC-MGME model and each comparison model in the final round (Epoch 100) melody generation time.

    In Table 4, in comparisons with all contrast models (including the heavyweight MuseNet and other reinforcement learning models), the p-values are all far less than 0.001. This extremely low p-value indicates that the shorter generation time exhibited by the AC-MGME model is not a random fluctuation in the experiment, but a significant advantage with high statistical significance. This finding provides decisive statistical evidence for the applicability of the model in real-time personalized music teaching applications that require rapid feedback.

    To verify the generalization ability of the AC-MGME model in more complex musical environments, the final accuracy rates on the MuseScore dataset are compared, as shown in Fig. 7.

    Fig. 7

    Accuracy results for music melody prediction by various algorithms in the MuseScore dataset.

    In Fig. 7, due to the significantly greater complexity and diversity of the MuseScore dataset in terms of instrument types and musical styles compared to the LAKH dataset, there is a universal decline in the accuracy of all models, which reflects the challenging nature of this testing task. Nevertheless, the AC-MGME model once again demonstrates its strong learning ability and robustness, topping the list with an accuracy rate of 90.15% in the final epoch. It is particularly noteworthy that, in the face of complex musical data, the advantages of AC-MGME over other reinforcement learning models (such as DDPG and DQN) are further amplified, and it successfully surpasses the powerful SOTA model MuseNet in the later stages of training. This result strongly suggests that the AC-MGME model is not overfitted to a single type of piano music but possesses the core ability to transfer and generalize to a wider, more diverse multi-instrument environment, laying a solid foundation for its application in real and variable music education scenarios.

    To verify whether the generalization ability of the AC-MGME model across a wider range of instruments is statistically significant, a two-sample t-test is similarly conducted on the accuracy results of each model at the final epoch (Epoch 100) on the MuseScore dataset, as shown in Table 5.

    Table 5 Statistical significance test (t-test) results for the final accuracy of the AC-MGME model and each comparison model on the MuseScore dataset.

    In Table 5, the test results indicate that the performance advantage of the AC-MGME model is statistically significant. Even in comparison with its strongest competitor, MuseNet, its p-value (0.042) is below the 0.05 significance level, while the differences from models such as Abouelyazid (2023), DDPG, and DQN are even more pronounced (p < 0.001). This strongly suggests that the model’s leading position on diverse, multi-instrument datasets is not accidental. More importantly, this conclusion confirms the robustness and generality of the AC-MGME framework, indicating that it is not limited to generating single piano melodies but can effectively learn and adapt to the melodic characteristics of a wider range of instruments, giving it application potential in more diverse music education scenarios.

    To evaluate the deployment potential of the model in real teaching scenarios, a dedicated test of inference performance and hardware resource consumption is conducted. The model is assessed not only on high-performance servers but is also deployed on a typical low-power edge computing device (NVIDIA Jetson Nano) to simulate operation on classroom tablets or dedicated teaching hardware. The comparison of inference performance and resource consumption of each model on high-performance GPUs and edge devices is shown in Fig. 8.

    Fig. 8

    Comparison of inference performance and resource consumption of each model on high-performance GPUs and edge devices.

    In Fig. 8, an analysis of the inference performance and resource consumption test reveals the significant advantages of the proposed AC-MGME model in practical deployment. In the high-performance GPU (NVIDIA Tesla V100) environment, AC-MGME not only demonstrated the fastest inference speed (15.8 milliseconds) but also had a GPU memory footprint (350 MB) far lower than all comparison models. Particularly when compared with the heavyweight Transformer model MuseNet (2850 MB), it highlighted the advantages of its lightweight architecture. More crucially, in the test on the low-power edge device (NVIDIA Jetson Nano) simulating real teaching scenarios, the average inference latency of AC-MGME was only 280.5 milliseconds, fully meeting the requirements of real-time interactive applications.

    Two objective indicators, namely Pitch Distribution Entropy and Rhythmic Pattern Diversity, are further introduced to quantify the musical diversity and novelty of the generated melodies. This helps evaluate whether the model can generate non-monotonous and creative musical content. Among them, Pitch Distribution Entropy measures the richness of pitch usage in a melody. A higher entropy value indicates that the pitches used in the melody are more uneven and unpredictable, usually implying higher novelty. Rhythmic Pattern Diversity calculates the unique number of different rhythmic patterns (in the form of n-grams) in the melody. A higher value indicates richer variations in the rhythm of the melody. The comparison results and statistical analysis of the objective musicality indicators of the melodies generated by each model are shown in Table 6.
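    Both indicators are straightforward to compute; the sketch below shows one plausible implementation on a toy melody of (pitch, duration) pairs, with n = 3 for the rhythm n-grams as an assumed choice.

    ```python
    import math
    from collections import Counter

    melody = [(60, 4), (62, 2), (64, 2), (62, 4), (60, 4), (67, 2), (64, 2)]

    # pitch distribution entropy: Shannon entropy of the pitch histogram
    counts = Counter(p for p, _ in melody)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

    # rhythmic pattern diversity: number of unique duration n-grams (n = 3)
    durations = [d for _, d in melody]
    ngrams = {tuple(durations[i:i + 3]) for i in range(len(durations) - 2)}

    print(f"pitch entropy: {entropy:.2f} bits, rhythm 3-grams: {len(ngrams)}")
    ```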

    Table 6 Comparison and statistical significance test (t-test) results for Pitch Distribution Entropy and Rhythmic Pattern Diversity between the AC-MGME model and each comparison model on the MuseScore dataset.

    Table 6 reveals the in-depth characteristics of each model in terms of musical creativity, and its results provide insights beyond the single accuracy indicator. As expected, MuseNet, as a large-scale generative model, obtains the highest scores in both Pitch Distribution Entropy and Rhythmic Pattern Diversity, and statistical tests show that its leading advantage is significant (p < 0.05), which confirms its strong capacity for content generation and innovation. However, a more crucial finding is that the AC-MGME model proposed in this study not only demonstrates highly competitive diversity but also significantly outperforms all other reinforcement learning-based comparison models in both indicators (p < 0.01). These results indicate that the AC-MGME model does not pursue unconstrained, maximized novelty, but rather achieves much higher musical diversity and creativity than comparable DRL models while ensuring the rationality of musical structures. This balance between “controllability” and “creativity” is an important reason why it obtained high scores in the subsequent subjective evaluations, especially in “teaching applicability”.

    To evaluate the subjective artistic quality and educational value that cannot be captured by technical indicators, a double-blind perception study is conducted. Thirty music majors and 10 senior music teachers with more than 5 years of teaching experience are invited as expert reviewers. The reviewers score the melody segments generated by each model anonymously on a 1–5 scale (higher scores indicate better performance) without knowing the source of the melodies. User feedback under the proposed model is further analyzed, covering scores (1–5 points) in three aspects: user experience, learning effect, and quality of the generated melody. The comparison with traditional music teaching and learning is shown in Fig. 9.

    Fig. 9

    Comparison chart of user feedback results.

    In Fig. 9, according to user feedback, satisfaction with the AC-MGME model is higher than with traditional music teaching. In melody quality especially, AC-MGME receives a high rating of 4.9 points, significantly better than the 3.7 points for traditional teaching. AC-MGME also performs well in user experience and learning effect, scoring 4.8 and 4.6 respectively, far exceeding the 3.6 and 3.9 of traditional teaching. This shows that the AC-MGME model not only improves learning outcomes and the student experience but also delivers higher-quality melody creation.

    The expert evaluation results of subjective quality of melodies generated by each model are shown in Table 7, and the statistical analysis results are shown in Table 8.

    Table 7 Subjective quality expert evaluation results of melodies generated by each model.
    Table 8 Statistical significance test (t-test) results of subjective evaluation between AC-MGME model and each comparison model.

    The results of Tables 7 and 8 show that, in the dimension of artistic innovation, MuseNet achieves the highest score with its strong generative capability, and its leading advantage is statistically significant (p = 0.008), which is completely consistent with the conclusion of the objective musicality indicators. However, in terms of melodic fluency, AC-MGME wins by a slight but statistically significant margin (p = 0.041), with expert comments generally describing its melodies as “more in line with musical grammar and more natural to the ear”. The most crucial finding comes from the core dimension of teaching applicability, where the AC-MGME model obtains an overwhelming highest score (4.80), and its advantage over all models including MuseNet is highly statistically significant (p < 0.001). The participating teachers pointed out that the melodies generated by AC-MGME are not only pleasant to listen to but, more importantly, “contain clear phrase structures and targeted technical difficulties, making them very suitable as practice pieces or teaching examples for students”. This series of findings strongly proves that, while pursuing technical excellence, the model accurately meets the actual needs of music education and can generate educational resources that combine artistic quality and practical value—a unique advantage over models that simply pursue novelty or accuracy.

    Discussion

    The results of this study clearly demonstrate the comprehensive advantages of the AC-MGME model across multiple dimensions. In terms of objective performance, the model not only outperforms all comparison benchmarks, including state-of-the-art models, in accuracy and F1 score, but also confirms the reliability of this advantage through strict statistical significance tests (p < 0.05). More importantly, in the subjective quality evaluation, AC-MGME achieved an overwhelming highest score in “teaching applicability”, indicating that it does not simply pursue technical indicators, but precisely meets the core needs of music education—generating musical content that combines structural rationality, artistic fluency, and teaching practical value. In addition, through deployment tests on low-power edge devices, this study is the first to empirically prove that while ensuring high-quality generation, the model has great potential for efficient and low-latency deployment in real classroom environments, laying a solid foundation for its transition from theory to application.

    This study indicates that the proposed AC-MGME model delivers strong performance in melody generation quality, learning effectiveness, and user experience. In melody generation quality, the AC-MGME model scores 4.9/5, higher than traditional music teaching, demonstrating its ability to generate melodies with both artistic and technical merit. AC-MGME also performs well in learning effectiveness, with a score of 4.6/5, higher than traditional teaching, proving its effectiveness in generating personalized learning paths and improving students’ skills. In user experience, AC-MGME scores 4.8/5, also higher than traditional teaching (3.6/5), further verifying the advantages of the interactive and convenient DRL-based teaching system. This is consistent with the findings of Dadman et al. (2024) [34] and Udekwe et al. (2024) [35]. Particularly in generation time, AC-MGME takes only 2.69 s to generate a melody, while other models such as DQN require 8.54 s; AC-MGME thus improves not only generation quality but also generation efficiency. In addition, the model performs excellently in generation quality (an accuracy rate of 95.95% and an F1 score of 91.02% on the LAKH MIDI dataset), higher than the other tested models, supporting the feasibility of real-time applications. This is consistent with the research of Chen et al. (2024) [36].

    Therefore, the proposed model algorithm can efficiently generate melody and provide personalized learning experience. By dynamically adjusting the melody generation strategy, AC-MGME can optimize the generated content in real time according to students’ different needs and learning progress, which greatly improves the intelligence and personalization level of music education and provides valuable practical basis for the development of AI-driven music education tools in the future.

    However, while affirming these achievements, we must carefully recognize the limitations and potential biases of the research. First, in terms of datasets, although the introduction of the MuseScore dataset has greatly expanded the diversity of instruments, the content of both datasets still focuses mainly on Western tonal music. This may lead to poor performance when the model generates non-Western or modern atonal music, resulting in an “over-representation” bias in a broader cultural context. Second, the size of the user sample is a limiting factor. Although the expert review panel of 40 music professionals provided valuable in-depth insights, this scale is not sufficient to fully represent the diverse perspectives of global music educators and learners. Therefore, although the results of this study are robust within the test framework, caution is still needed when generalizing them to all musical cultures and educational systems, and more localized verification should be conducted.

    Finally, the application of such AI technologies in the field of education will inevitably raise ethical issues that require serious attention. A core concern is the potential risk of abuse, particularly music plagiarism. The model may learn and reproduce copyrighted melody segments during training, thereby triggering intellectual property issues. To mitigate this risk, future system iterations must integrate plagiarism detection algorithms, for example, by comparing generated content with n-gram sequences in the training set, and design corresponding reward mechanisms to encourage originality. Another equally important ethical issue is the privacy and security of student data. While tracking and analyzing students’ practice data can enable personalized teaching, it also involves sensitive personal performance information. To address this, strict data management strategies must be adopted, including anonymizing and aggregating all data, ensuring system design complies with relevant regulations such as the General Data Protection Regulation (GDPR), and fully disclosing the content, purpose, and usage of data collection to students, parents, and teachers. These measures aim to build a trustworthy and responsible intelligent music education ecosystem.

  • ChatGPT boss warns against relying on AI as primary source of information: Here’s why – Mint

    1. ChatGPT boss warns against relying on AI as primary source of information: Here’s why  Mint
    2. Women with AI ‘boyfriends’ mourn lost love after ‘cold’ ChatGPT upgrade  Al Jazeera
    3. Did the system update ruin your boyfriend? Love in a time of ChatGPT | Arwa Mahdawi  The Guardian
    4. Is AI hitting a wall?  Financial Times
    5. GPT-5’s Voice Mode Can Hold a Decent Conversation, but Please Don’t Talk to ChatGPT in Public  CNET

  • How the US, UK and EU approach risk

    Imagine logging into your bank account one morning and finding everything frozen—cards declined, standing orders stopped and your savings untouchable. No fraud alert, no bounced cheque. Just a brief message: “We are closing your account. Please make alternative arrangements.”

    This is not a rare nightmare. Around the world, more people and businesses are being “de-banked”—cut off from basic banking services.

    In the financial industry, the practice is called “de-risking”: banks sever ties with clients, or even whole sectors, to avoid regulatory or reputational risk.

    While it might sound like a niche compliance issue, in reality, it sits at the intersection of financial crime prevention, political rights, trade flows and everyday access to money—and the UK, US and EU are taking sharply different approaches to it.

    Earlier this month, US President Donald Trump signed an executive order aimed at preventing banks from denying services based on political or religious beliefs. The order bans the use of “reputational risk” as a justification for closing accounts and directs banking regulators to review practices within 180 days.

    Supporters say the move protects freedom of political expression and stops discrimination against conservatives, who claim they have been disproportionately targeted.

    Critics warn it could force banks to keep serving clients engaged in activities that create genuine financial crime or security risks.

    As with many issues Trump is passionate about, the topic of de-banking in the US was spurred by his personal experiences. He repeatedly accused JPMorgan Chase and Bank of America of refusing his business after his first term as president because of his and his supporters’ conservative views.

    He claims JPMorgan gave him 20 days to close his account and that Bank of America refused a large deposit, though both banks have denied taking politically motivated action.

    Another high-profile case was that of the National Council for Religious Freedom (NCRF), an organization founded in 2022 that explicitly backs politicians who support combining politics with religion and vote against bills such as the Equality Act, which prohibits discrimination on the basis of sex, gender identity and sexual orientation, “because it prohibits religious freedoms.”

    Former Kansas governor Sam Brownback, the founder of the NCRF, claimed he had been unfairly de-banked in the US. – AP Photo

    Groups like these, especially if they rise to national prominence quickly and start depositing large sums into their accounts without providing sufficient background or donor transparency, can trigger automatic responses from banks worried about compliance with anti-money-laundering regulations, and can find themselves subject to enhanced monitoring.

    So when NCRF’s accounts at JPMorgan Chase were suspended, it was probably not because of the client’s political beliefs. Banks are profit-maximising institutions that aim to serve a wide yet reliable client base—drawing political attention to their work is the stuff of literal nightmares for them, especially for banking behemoths like JPMorgan Chase.

    In a letter, the bank said the closure was due to incomplete compliance documentation—not religious or political reasons.

    Yet the NCRF used this decision to decry “woke capitalism” and launch a national campaign in the US to bar banks from basing account decisions on reputational risk, pressing them to focus solely on quantifiable risks like credit, operational or compliance issues.

    The new executive order is cause for headaches for bankers. In practice, lenders may have to review thousands of past account closures, document decisions more extensively and possibly reinstate customers they previously cut off.

    In Britain, the debate was turbo-charged by the 2023 Nigel Farage–Coutts affair. When the high-end bank closed the Brexit campaigner’s account, internal documents later revealed the decision factored in his political views. The row became front-page news, prompting government promises to strengthen transparency.

    From a compliance and commercial standpoint, there are reasons why Coutts’ decision may have been well within the norms of risk management. Farage’s status as a politician makes him a Politically Exposed Person or PEP under anti–money laundering rules.

    UK banks are required to apply enhanced due diligence to PEPs, including detailed checks on sources of wealth, closer transaction monitoring and ongoing reassessment of any potential links to corruption or financial crime. That doesn’t imply wrongdoing—but it does mean the account demands more resources and carries a higher regulatory burden. For a bank whose value proposition is built on discreet, low-risk relationships, this can tip the cost-benefit balance.

    Reports at the time suggested that Farage’s account had fallen below Coutts’ minimum financial thresholds for certain services. When a client no longer meets profitability benchmarks, but still demands high levels of compliance oversight and carries reputational sensitivities, a private bank has strong incentives to part ways.

    In that light, Coutts’ choice looks less like a political purge and more like a calculated alignment of its client book with its risk appetite and commercial strategy.

    However, that was not the angle that dominated the headlines, and it ended up shaping de-risking and de-banking policy in a significant way in the UK.

    In 2024, complaints to the Financial Ombudsman Service about account closures rose 44% to nearly 3,900, with a higher proportion upheld in favour of consumers. Meanwhile, over 140,000 business accounts were closed in 2023—raising concerns, especially for small businesses and non‑profits.

    Since then, UK banks must give customers at least 90 days’ notice before closure and provide more detail on why accounts are terminated. The conversation is still dominated by high-profile, politically sensitive cases rather than the wider economic and trade implications of de-risking.

    The European Central Bank stands next to the buildings of the banking district in Frankfurt, Germany. September 2019. – AP Photo

    By contrast, Brussels has treated de-risking as a long-standing, largely technical policy challenge. For years, EU institutions have issued guidance to safeguard financial inclusion while enforcing anti–money laundering and counter–terrorism financing (AML/CFT) rules.

    “European Banking Federation (EBF) member banks often find themselves caught between a rock and a hard place: they must comply with stringent AML/CFT requirements—they are required to end relationships with their riskiest clients—yet they are requested to ensure access to basic banking services for legitimate customers,” the European Banking Federation told Euronews in a statement.

    “Hence their de-risking decisions should remain proportionate and risk-based, not indiscriminate bans on entire countries or customer groups,” they continued.

    According to the EBF, most banks in Europe focus on individual, case-by-case de-risking and pay particular attention to “red flags”: for example, situations where a customer’s identity cannot be verified using secure, government-approved ID checks, or any transaction in which they cannot confidently confirm who the person or company really is, or who the “beneficial owner” is.

    For member banks, it is a matter of weighing whether the risks can be reduced enough to comply with regulations and protect the bank’s reputation, and whether managing that risk would require more time, money, and effort than the account is ultimately worth.

    “In the EU, de-risking is increasingly recognised as a significant consumer issue, though it is neither a new concern nor one that fully mirrors the priorities of the Trump Administration,” the EBF statement continues.

    “For years, EU institutions—most notably the European Banking Authority—have issued guidance aimed at safeguarding financial inclusion and ensuring that legitimate customers are not unfairly excluded from the banking system.”

  • What is de-banking? How EU, US & UK banks screen their risky customers – Euronews.com

    1. What is de-banking? How EU, US & UK banks screen their risky customers  Euronews.com
    2. Trump’s debanking order could create headaches for banks, sources say  Reuters
    3. Trump order zeroes in on payments  Payments Dive
    4. President Trump plays the victim card over and over again | ELAINE HARRIS SPEARMAN  Gadsden Times
    5. Trump Says He’s Cracking Down On America’s Biggest Banks: Could It Impact You?  Nasdaq

  • Bernstein Reiterates Outperform on TSMC (TSM), Citing Strategic Chip Role

    Taiwan Semiconductor Manufacturing Company Ltd (NYSE:TSM) is one of the best growth stocks to buy according to analysts. On August 15, Bernstein reiterated its Outperform rating on Taiwan Semiconductor Manufacturing Company Ltd (NYSE:TSM), with a price target of $249.00. That represents an implied upside of only 4.1% from the current price of $239.1.

    A close-up of a complex network of integrated circuits used in logic semiconductors.

    However, Bernstein cited the Taiwanese company’s strategic importance in the global semiconductor landscape, emphasizing that it accounts for roughly 15–25% of the worldwide wafer fab equipment (WFE) market. That is massive, especially when set against the whole of China’s 30–40% share.

    Bernstein also cited that a significant share of the company’s capital expenditures is allocated beyond just traditional wafer fabrication. According to the firm, TSMC is increasingly investing higher amounts in infrastructure, packaging, and testing technologies. These are critical components in advanced chip production.

    Last month, the company delivered a stellar Q2 2025 performance: revenue climbed to NT$933.79 billion (US$30.07 billion), up 38.6% year-over-year; net income reached NT$398.27 billion (about US$13.53 billion); and diluted EPS came in at NT$15.36 (US$2.47 per ADR), a whopping 60.7% jump year-over-year.

    While we acknowledge the potential of TSM as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you’re looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.

    READ NEXT: 10 Best Military Tech Stocks to Buy Now and 10 High Growth Stocks Outside Tech Analysts Are Bullish On.

    Disclosure: None.

  • Novo Nordisk A/S (NVO) Delivers 18% Sales Growth in H1 2025

    Novo Nordisk A/S (NYSE:NVO) is one of the Reddit Stocks with the Highest Upside Potential. On August 6, the company stated that it delivered 18% sales growth in H1 2025. The company has lowered its full-year outlook because of lower growth expectations for its GLP-1 treatments in H2 2025. It has therefore been taking measures to sharpen its commercial execution and ensure efficiencies in its cost base, while continuing to invest in future growth. With over 1 billion people living with obesity globally, including more than 100 million in the US, and only a few million on treatment, Novo Nordisk A/S (NYSE:NVO) expects to capitalise on strong growth opportunities, thanks to its healthy product portfolio and future pipeline.

    A closeup shot of a laboratory technician handling a medical device used for fertility treatments.

    Novo Nordisk A/S (NYSE:NVO) highlighted that sales within Diabetes and Obesity care rose 16% in Danish kroner to DKK 145.4 billion (18% at CER), mainly driven by Obesity care growth of 56% in Danish kroner to DKK 38.8 billion (58% at CER) and GLP-1 diabetes sales increasing 8% in Danish kroner (10% at CER). Within R&D, Novo Nordisk A/S (NYSE:NVO) plans to advance subcutaneous and oral amycretin into phase 3 development for weight management, on the basis of clinical studies completed during Q1 2025. For FY 2025, sales growth is anticipated to be 8–14% at CER, and operating profit growth is projected at 10–16% at CER.

    ClearBridge Investments, an investment management company, released its Q1 2025 investor letter. Here is what the fund said:

    “We initiated a new position in Novo Nordisk A/S (NYSE:NVO), the global leader in diabetes care and one of two dominant players in the fast-growing GLP-1 diabetes and obesity drug market. A slowdown in prescriptions for Novo’s GLP-1 drugs, combined with confusion surrounding a clinical trial of its next-generation candidate, CagriSema, caused a significant pullback in the stock. We saw this as a buying opportunity. We believe the GLP-1 market remains vast and that Novo (alongside Eli Lilly) is well-positioned to maintain a duopolistic structure for years to come, given the complexity of manufacturing, differentiated intellectual property and brand strength. We also expect growth to reaccelerate as supply ramps up following its acquisition of Catalent and as regulators crack down on unlicensed compounders.”

    While we acknowledge the potential of NVO as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you’re looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.

    Continue Reading

  • Optimization of PES-based hollow fiber membranes incorporating MgO-modified activated carbon via response surface methodology for enhanced pure water permeability

    Optimization of PES-based hollow fiber membranes incorporating MgO-modified activated carbon via response surface methodology for enhanced pure water permeability

    Morphological and structural characterization of AC and AC-MgO particles

    Figure 3 shows the XRD patterns of activated carbon, pure MgO, and AC-MgO particles. The XRD pattern of pure MgO particles exhibits sharp characteristic peaks at 2θ angles of 36.8°, 42.7°, 62°, 74.2°, and 78.3°, corresponding to the (111), (200), (220), (311), and (222) planes, respectively39. The two broad peaks observed at around 22° and 44° in the XRD patterns of both pure AC and AC-MgO particles are attributed to the (100) and (101) diffraction planes of the carbon structure40. A comparison of the diffraction patterns shows that the pattern of the AC-MgO nanocomposite superimposes the reflections of pure AC and of the cubic crystalline structure of MgO, indicating that the synthesis was successful.

    Fig. 3

    X-ray diffraction patterns of pure MgO, AC, and AC-MgO particles.

    Figure 4a shows the SEM image of the activated carbon particles used in this study, indicating particles with a clear, crack-free, and smooth surface. The SEM micrograph and the corresponding EDS analysis of the AC-MgO nanocomposite particles are presented in Figs. 4b–d. As can be seen from Fig. 4b, a significant change in the surface morphology of the AC particles occurred after their modification with MgO nanoparticles. Additionally, the SEM image at higher magnification (Fig. 4c) confirms the presence of MgO nanoparticles on the nanocomposite surface, exhibiting a small spherical morphology and partial overlap with one another. From the EDS analysis results presented in Fig. 4d, it can be observed that the Mg and O elements were uniformly distributed, indicating that the MgO nanoparticles were well dispersed on the activated carbon.

    Fig. 4

    SEM images of (a) AC powder, (b) AC-10 wt% MgO nanocomposites, (c) panel (b) at higher magnification, and (d) EDS spectrum and EDS elemental maps of carbon, magnesium, and oxygen for the AC-MgO-10% nanocomposite.

    The microstructure of the AC-MgO nanocomposites was also examined by TEM and selected area electron diffraction (SAED) (Fig. 5a, b). The discontinuous ring pattern in the SAED image confirms the polycrystalline nature of the MgO nanoparticles, with concentric rings corresponding to the (111), (200), (220), (311), and (222) reflections of MgO. HR-TEM imaging (Fig. 5c) reveals lattice fringes with a spacing of 0.21 nm, attributed to the (200) planes. These results align well with the XRD data.
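    As a quick consistency check (the paper does not quote it, but the lattice parameter of cubic MgO is commonly reported as a ≈ 0.421 nm), the observed fringe spacing follows from the cubic interplanar-spacing relation:

    $$d_{200} = \frac{a}{\sqrt{h^{2} + k^{2} + l^{2}}} = \frac{0.421\ \text{nm}}{\sqrt{2^{2} + 0 + 0}} \approx 0.21\ \text{nm}$$

    By Bragg’s law, assuming Cu Kα radiation (λ ≈ 0.154 nm), this spacing also places the (200) reflection near 2θ ≈ 42.9°, consistent with the 42.7° peak in the XRD pattern.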

    Fig. 5

    (a) TEM image showing the morphology and dispersion of AC-10 wt% MgO nanocomposites; (b) SAED pattern confirming the crystalline structure of MgO nanoparticles within the composite; (c) HR-TEM image revealing the lattice fringes and detailed nanostructure of the AC-10 wt% MgO nanocomposites.

    Surface analysis of the membranes

    Characterizing the surface properties of ultrafiltration membranes is essential for improving their separation performance. In this study, the effect of adding AC-MgO particles on the surface properties of PES membranes was analyzed in terms of surface roughness and contact angle. As illustrated by the AFM images in Fig. 6, the surface roughness of the composite membranes is higher than that of the pristine PES membrane, with Ra values of 6.94 ± 0.125 nm and 16.59 ± 0.487 nm for the pristine PES and PES-AC/MgO membranes, respectively. The addition of particles produced a rougher surface, which facilitated filtration by increasing both the effective filtration area and the pore size27. Figure 7 shows the contact angles of the pure PES membrane and the membrane containing 0.354 wt% AC-MgO. The results confirm an improvement in hydrophilicity after incorporation of AC-MgO particles into the membrane. This improvement resulted from the presence of polar functional groups (–OH and –COOH) on the surface of AC-MgO, which migrated toward the membrane surface during membrane formation and allowed greater interaction with water molecules41,42. Furthermore, as evident from the AFM results, the membrane roughness increased with the addition of particles, which could make the hydrophilic surface even more water-attracting43.
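    The paper does not invoke it explicitly, but the standard rationale for roughness amplifying hydrophilicity is the Wenzel relation, in which r ≥ 1 is the ratio of the actual to the projected surface area:

    $$\cos\theta^{*} = r\,\cos\theta$$

    For an intrinsically hydrophilic surface (θ < 90°, so cos θ > 0), a larger r pushes the apparent contact angle θ* further below θ, consistent with the combined roughness and contact angle trends reported here.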

    Fig. 6

    Surface roughness analysis of PES membranes containing (a) 0 wt% and (b) 0.354 wt% AC-MgO particles.

    Fig. 7

    Water contact angle for PES-based hollow fiber membranes.

    Response surface methodology

    Results of ANOVA

    A summary of the ANOVA results of the RSM model for the PWP of the developed membranes is shown in Table 3. An F-value of 1.13 for lack of fit indicates that it is not significant relative to the pure error. Given that, at the 95% confidence level, variables with p-values less than 0.05 are generally considered statistically significant44, the model is highly significant (p < 0.0001), while the lack of fit (p = 0.47) is not. These results confirm that the model adequately fits the experimental data and is appropriate for predicting the response within the studied range of variables.

    Table 3 ANOVA results of RSM model for pure water permeability.
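    For readers without Design-Expert, the same kind of significance test can be approximated in Python. The sketch below uses statsmodels; the column names and the placeholder data are hypothetical stand-ins for the actual design runs (a formal lack-of-fit test would additionally require the replicated centre points):

```python
# Minimal sketch of an RSM-style significance check, approximating the
# Design-Expert ANOVA of Table 3. Column names (Dope, Bore, AirGap, Conc,
# PWP) and the placeholder data are hypothetical, not taken from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 30
df = pd.DataFrame(rng.uniform(-1, 1, size=(n, 4)),
                  columns=["Dope", "Bore", "AirGap", "Conc"])  # coded levels
# Placeholder response; in practice this column holds the measured PWP runs.
df["PWP"] = 20 + 14 * df["AirGap"] + 21 * df["Conc"] ** 2 + rng.normal(0, 2, n)

# Full quadratic model: main effects, two-factor interactions, pure quadratics.
formula = ("PWP ~ (Dope + Bore + AirGap + Conc)**2 "
           "+ I(Dope**2) + I(Bore**2) + I(AirGap**2) + I(Conc**2)")
fit = smf.ols(formula, data=df).fit()

print(f"model F = {fit.fvalue:.2f}, p = {fit.f_pvalue:.3g}")  # overall test
print(fit.pvalues.round(4))  # term-by-term p-values to compare against 0.05
```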

    A predictive second-order polynomial model was established using multiple regression analysis within the RSM framework to quantify the effects of four key factors on the pure water permeability, as expressed by the following equation:

    $$\begin{aligned} \text{PWP} &= 20.1811 + 4.54\,(\text{Dope}) + 5.30\,(\text{Bore}) + 13.85\,(\text{Air gap}) + 8.58\,(\text{Concentration}) \\ &\quad + 2.36\,(\text{Dope} \times \text{Bore}) + 2.45\,(\text{Dope} \times \text{Air gap}) + 8.33\,(\text{Dope} \times \text{Concentration}) \\ &\quad + 5.06\,(\text{Bore} \times \text{Air gap}) + 10.37\,(\text{Bore} \times \text{Concentration}) + 8.00\,(\text{Air gap} \times \text{Concentration}) \\ &\quad - 0.7296\,(\text{Dope})^{2} - 3.81\,(\text{Bore})^{2} - 6.03\,(\text{Air gap})^{2} + 21.44\,(\text{Concentration})^{2} \end{aligned}$$

    (2)
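    Equation (2) is straightforward to evaluate programmatically. The sketch below assumes, as is conventional in RSM though not stated alongside the equation, that the coefficients apply to coded factor levels in [−1, 1] rather than to physical units:

```python
# Sketch: the second-order polynomial of Eq. (2) as a Python function.
# Inputs are assumed to be coded factor levels in [-1, 1], not physical units.
def pwp(dope: float, bore: float, air: float, conc: float) -> float:
    """Predicted pure water permeability from Eq. (2)."""
    return (20.1811
            + 4.54 * dope + 5.30 * bore + 13.85 * air + 8.58 * conc
            + 2.36 * dope * bore + 2.45 * dope * air + 8.33 * dope * conc
            + 5.06 * bore * air + 10.37 * bore * conc + 8.00 * air * conc
            - 0.7296 * dope ** 2 - 3.81 * bore ** 2 - 6.03 * air ** 2
            + 21.44 * conc ** 2)

print(pwp(0, 0, 0, 0))  # centre point: prints the intercept, 20.1811
print(pwp(1, 1, 1, 1))  # all four factors at their high coded level
```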

    To further evaluate the model’s accuracy, the predicted values are plotted against the experimental data in Fig. 8. As shown in this figure, the data points lie very close to the diagonal line, indicating close agreement between the observed and the predicted values and confirming the adequacy of the fit. Additionally, the fit is supported by the coefficient of determination ($R^{2}$): a high $R^{2}$ value of 0.94 demonstrates a strong correlation between the predicted and observed values. Furthermore, the minimal difference (less than 0.2) between the predicted $R^{2}$ ($R^{2}_{\text{pred}}$) and the adjusted $R^{2}$ ($R^{2}_{\text{adj}}$) confirms the validity of the model. Based on these statistically significant results, it can be concluded with confidence that the model is effective in predicting optimal operating conditions.
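    The three statistics cited above can be computed directly from a least-squares fit. A minimal numpy sketch, assuming a design matrix X that includes the intercept column and a response vector y:

```python
# Sketch: R^2, adjusted R^2, and predicted R^2 (via PRESS) for y ≈ X @ beta.
import numpy as np

def r2_stats(X: np.ndarray, y: np.ndarray) -> dict:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = np.sum((y - y.mean()) ** 2)
    n, p = X.shape
    # Leverages h_ii from the hat matrix H = X (X'X)^{-1} X'.
    h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(X.T @ X), X)
    press = np.sum((resid / (1 - h)) ** 2)  # leave-one-out prediction error
    return {"R2": 1 - ss_res / ss_tot,
            "R2_adj": 1 - (ss_res / (n - p)) / (ss_tot / (n - 1)),
            "R2_pred": 1 - press / ss_tot}
```

    Because PRESS replaces each residual with its leave-one-out counterpart, $R^{2}_{\text{pred}}$ penalizes overfitting more severely than $R^{2}_{\text{adj}}$, which is why a gap below 0.2 between the two is taken as evidence of model validity.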

    Fig. 8

    Comparison between predicted and measured pure water permeability values.

    Figure 9a presents the normal probability plot of residuals for the PWP responses. Residuals represent the difference between the measured and the predicted values. In this figure, the distribution of points closely follows a straight line, indicating that the residuals are approximately normally distributed. Figure 9b displays the residuals plotted against the predicted PWP values. Here, the studentized residuals are randomly scattered, indicating that the residuals are independent of the predicted response; the absence of any discernible pattern further supports this assumption. Additionally, the residuals fluctuate only slightly around the x-axis, which is consistent with the assumption of constant variance. These results collectively demonstrate the predictive accuracy of the model.

    Fig. 9

    (a) Normal probability plots of studentized residuals for pure water permeability (PWP), and (b) variation of residuals versus predicted PWP values.
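    Both panels of Fig. 9 are standard regression diagnostics and can be reproduced for any fitted model. A sketch with matplotlib and scipy, using placeholder arrays in place of the study’s actual predictions and studentized residuals:

```python
# Sketch: the two diagnostic plots of Fig. 9 (normal probability plot and
# residuals vs. predicted). The arrays below are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
predicted = rng.uniform(10, 60, 30)   # stand-in for predicted PWP values
residuals = rng.standard_normal(30)   # stand-in for studentized residuals

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
stats.probplot(residuals, dist="norm", plot=ax1)  # points near line => normal
ax1.set_title("Normal probability plot")
ax2.scatter(predicted, residuals)
ax2.axhline(0.0, linestyle="--")  # random scatter about 0 => constant variance
ax2.set_xlabel("Predicted PWP")
ax2.set_ylabel("Studentized residual")
fig.tight_layout()
plt.show()
```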

    From the above results, the model predictions for water permeability are reasonably accurate and reliable. The optimal processing parameters predicted by the model are an AC-MgO concentration of 0.354 wt%, an air gap distance of 18.933 cm, a dope solution flow rate of 2.981 ml/min, and a bore fluid flow rate of 7.775 ml/min. The maximum theoretical water permeability of the hollow fiber membrane under these optimal processing conditions was estimated to be approximately 56.715 L/(m²·h·bar), while the experimentally measured permeability under this condition was 52.49 L/(m²·h·bar). This result suggests that the model has good predictive capability for identifying the variable settings that achieve the optimal response.
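    As a quick check of that claim, the measured optimum falls about 7.4% below the model’s estimate, since (56.715 − 52.49)/56.715 ≈ 0.074, a deviation small enough to support the stated predictive capability.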

    Impact of particle concentration and process parameters on PWP

    Figures 10 and 11a, d, and f illustrate the correlation between the concentration of AC-MgO particles in the dope solution and the pure water permeability. As the AC-MgO content increases, the PWP of the membranes increases, although a slight decrease is observed at low particle concentrations. These results indicate that higher particle concentrations in the dope solution have a more pronounced positive effect on PWP. This improvement is mainly ascribed to the synergistic effects of the increased porosity and hydrophilicity imparted by the added particles45. The relationship between PWP and air gap distance, shown in Figs. 10 and 11a, c, and e, reveals that permeability increases with a larger air gap, which is attributed to the formation of longer finger-like structures46. Figures 10 and 11b, c, and d also show that PWP increases with increasing dope solution flow rate, which is attributed to the pronounced formation of large macrovoids47. Additionally, as depicted in Figs. 10 and 11b, e, and f, the PWP increases with increasing bore fluid flow rate, likely due to the greater thickness of the layer with the finger-like structures48. Further details are described in the following sections.

    Fig. 10

    Variation of pure water permeability (PWP) with (a) air gap and particle concentration in solution, (b) bore fluid rate and dope solution rate, (c) air gap and dope solution rate, (d) particle concentration and dope solution rate, (e) air gap and bore fluid rate, and (f) particle concentration and bore fluid rate. (Plots were generated using Design-Expert software, version 13.0.5.0 (Stat-Ease, Inc.); for more details, visit https://www.statease.com/software/design-expert/.)

    Fig. 11

    Contour plots of pure water permeability variation with (a) particle concentration in solution and air gap, (b) bore fluid rate and dope solution rate, (c) air gap and dope solution rate, (d) particle concentration and dope solution rate, (e) air gap and bore fluid rate, and (f) particle concentration and bore fluid rate. (Plots were generated using Design-Expert software, version 13.0.5.0 (Stat-Ease, Inc.); for more details, visit https://www.statease.com/software/design-expert/.)
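    The response surfaces in Figs. 10 and 11 were produced in Design-Expert, but any single panel can be approximated directly from Eq. (2). A matplotlib sketch for the concentration/air-gap panel, holding the dope and bore factors at their coded centre (0); as before, coded levels in [−1, 1] are an assumption:

```python
# Sketch: approximate reproduction of one contour panel of Fig. 11 from the
# Eq. (2) polynomial, with dope and bore flow rates fixed at coded 0.
import numpy as np
import matplotlib.pyplot as plt

def pwp(dope, bore, air, conc):
    return (20.1811 + 4.54 * dope + 5.30 * bore + 13.85 * air + 8.58 * conc
            + 2.36 * dope * bore + 2.45 * dope * air + 8.33 * dope * conc
            + 5.06 * bore * air + 10.37 * bore * conc + 8.00 * air * conc
            - 0.7296 * dope**2 - 3.81 * bore**2 - 6.03 * air**2
            + 21.44 * conc**2)

air, conc = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
cs = plt.contourf(air, conc, pwp(0.0, 0.0, air, conc), levels=15)
plt.colorbar(cs, label="Predicted PWP")
plt.xlabel("Air gap (coded)")
plt.ylabel("AC-MgO concentration (coded)")
plt.show()
```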

    Microstructure analysis of manufactured HFM cross section

    To investigate the effect of the studied parameters on HFM morphology, SEM micrographs were taken from the transverse cross-section of the manufactured membranes. These images reveal that the cross-section of the membranes generally consisted of three distinct layers, with pore morphology varying depending on the process parameters, as discussed below.

    Effect of particle concentration

    The SEM images of the manufactured HFM cross-section with various AC-MgO particle content are shown in Fig. 12. As observed from this figure, the membrane cross-section typically comprised three distinct layers: an outer layer, which is generally dense with minimal porosity; a middle layer characterized by a macro-porous structure that influences permeability49; and an inner layer composed of micro- and meso-porous structures. Additionally, Fig. 12 illustrates that increasing the AC-MgO content led to the formation of larger finger-like pores. These particles acted as nucleation sites for pore formation by providing surfaces where the non-solvent preferentially interacted with the dope solution. The improved hydrophilicity of the dope solution, resulting from the presence of AC-MgO, accelerated the solvent–water exchange rate and facilitated the development of larger pores and an increase in the thickness of the finger-like structured layer12,50. These morphological changes, along with increased hydrophilicity due to the presence of AC-MgO particles, which allowed water molecules to more easily wet the membrane surface and penetrate the pores51, significantly enhanced the PWP of the membrane. These findings are in excellent agreement with the data presented in Figs. 10 and 11.

    Fig. 12

    SEM micrographs of the hollow fiber cross-sections prepared with varying concentrations of AC-MgO particles in the dope solution: (a) 0 wt%, (b) 0.18 wt%, and (c) 0.36 wt%.

    Effect of air gap

    The effect of air gap distance on the thickness of the finger-like structured layer in the HFM is presented in Fig. 13. By increasing the distance from 2 cm to 22 cm and 42 cm, while keeping the other parameters constant, the thickness of this layer in samples S8, S2, and S1 increased from 30 μm to 43 μm and 60 μm, respectively (Fig. 13a–c). Similarly, the thickness of the finger-like structured layer increased from 38 μm to 60 μm for samples S15 and S18, respectively (Fig. 13d, e). Moreover, increasing the air gap distance produced larger pore diameters, as the fibers were exposed to air for a longer time before being submerged in the coagulation bath, allowing water vapor to induce phase inversion more extensively52,53. According to the SEM micrographs presented in Fig. 13, the improved PWP of the composite hollow fiber membranes can be ascribed to the combined effects of longer finger-like structures in the middle layer and a thinner dense structure in the outer layer compared with membranes fabricated at a shorter air gap6. All of these factors contributed to the increase in water permeability through the membrane, indicating a direct relationship between PWP and air gap distance, as is also evident in Figs. 10 and 11.

    Fig. 13

    SEM micrographs of hollow fiber cross-sections for various air gap distances: (a) 2 cm, (b) 22 cm, (c) 42 cm, (d) 12 cm, and (e) 32 cm.

    Effect of dope solution flow rate

    It has been demonstrated that the flow rate of the dope solution during the fabrication of hollow fiber membranes significantly influences their structural morphology. In this study, four different flow rates were used to investigate the effects of dope solution flow rate on the PWP and morphology of the hollow fiber membranes. As shown in Fig. 14a, b, increasing the flow rate from 2 to 3 ml/min for samples S28 and S27 resulted in an increase in the outer diameter of the hollow fibers from 473 to 534 μm and a significant increase in their wall thickness from 34 μm to 90 μm. Similarly, increasing the dope flow rate from 1.5 ml/min to 2.5 ml/min for samples S12 and S2 increased the outer diameter from 539 to 609 μm and the wall thickness by 54%, from 52 to 80 μm (Fig. 14c, d). These findings indicate that increasing the polymer dope solution flow rate mainly affects the outer dimension of the hollow fiber, leading to a notable thickening of the fiber wall. These observations are consistent with previous studies on PES hollow fiber membranes47, which reported that higher flow rates produce fibers with thicker walls. Moreover, the outer surface of membranes fabricated at higher dope solution flow rates exhibited the largest pore sizes. This is attributed to the shorter residence time, which limited solvent evaporation into the air. Consequently, phase inversion of the membrane’s outer surface occurred entirely within the external coagulation bath, where rapid demixing between the solvent and water led to the formation of larger surface pores compared with membranes exposed to longer evaporation-induced phase inversion54. As illustrated in Fig. 14, an increase in the outer diameter of the membrane is also associated with a greater wall thickness and the development of larger finger-like pores. These structural changes contribute to the enhancement in PWP.
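    (As an arithmetic check of the quoted figure: (80 − 52)/52 ≈ 0.538, i.e. roughly a 54% increase in wall thickness.)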

    Fig. 14

    Effect of the dope solution flow rate on hollow fiber pore morphology. (a) 2 ml/min, (b) 3 ml/min, (c) 1.5 ml/min, and (d) 2.5 ml/min.

    Effect of bore fluid flow rate

    The effect of the bore fluid flow rate on fiber morphology, with the other process parameters held constant, is illustrated in Fig. 15. For samples S19 and S20, increasing the flow rate from 5 ml/min to 8 ml/min increased the thickness of the finger-like structured middle layer from 30 μm to 35 μm and the inner diameter from 344 to 358 μm, respectively (Fig. 15a, b). Similarly, for samples S2 and S10, increasing the flow rate from 6.5 ml/min to 9.5 ml/min increased the inner diameter from 458 μm to 470 μm and the thickness of the middle layer from 43 μm to 56 μm, respectively (Fig. 15c, d). These data indicate that increasing the bore fluid flow rate through the spinneret led to elongated voids and produced a thinner fiber wall, resulting in enhanced PWP. This observation is attributed to the increased internal pressure exerted by a higher bore fluid flow rate on the inner surface of the hollow fiber during spinning, which in turn reduces the membrane wall thickness55. Additionally, at high flow rates, the bore fluid rapidly diffused into the dope solution. Consequently, phase inversion occurred faster throughout the dope solution relative to polymer migration towards the bore fluid, leading to the formation of a more homogeneous void distribution56.

    Fig. 15

    Effect of the bore fluid flow rate on morphology of hollow fiber cross section: (a) 5 ml/min, (b) 8 ml/min, (c) 6.5 ml/min, and (d) 9.5 ml/min.

    As explained above, the improved PWP of the developed HFMs can be attributed to their enhanced hydrophilicity and the presence of large finger-like pore structures in the middle layer. Meanwhile, the inner layer, characterized by its small pore size, is primarily responsible for the selectivity and separation performance of the membrane. For potential application in hemodialysis, this membrane design allows small solutes such as urea to pass freely, enabling effective removal of uremic toxins. At the same time, larger molecules, including proteins (e.g., albumin) and blood cells, are efficiently retained by the inner layer, thus preventing their leakage into the dialysate solution57,58. Therefore, despite the increased permeability, selective filtration is maintained.

    Continue Reading