Blog

  • Toyota GR Supra gets a V8, but you won’t be allowed to drive it. Here’s why


    Toyota GR Supra GEN3 supercar packs a roaring Lexus V8 engine and wild aero elements, but it’s a track-only model, not meant for regular roads.


The Toyota GR Supra is heading into its final production year, but the Japanese car manufacturer doesn't want to send the model off quietly. While the road-going version is readying for retirement, a track-focused version is making plenty of noise. The Toyota GR Supra has received a naturally aspirated V8 engine and a race-ready design. Disappointingly, this V8-powered GR Supra won't be available on dealer lots; it has been built specifically for Australia's Supercars Championship in 2026, where it will compete with rivals like the Chevrolet Camaro and the Ford Mustang.

The track car will draw power from a reworked 5.0-litre naturally aspirated V8, the same all-aluminium quad-cam unit used in the Lexus LC 500, RC F and the 2019 Dakar-winning HiLux rally truck, though the engine has been specially modified for the Supra. Toyota first teased the project in 2025 and recently previewed the car with a short clip on Instagram.

Toyota Australia has revealed how quickly the automaker decided on the V8 layout: a six-cylinder configuration was only considered for about a minute. This marks a notable shift from the road-spec GR Supra, which has always used BMW-sourced engines, including a turbocharged 2.0-litre four-cylinder and a turbocharged 3.0-litre six-cylinder.

    Toyota GR Supra GEN3 supercar is not just about a V8 engine

The Toyota GR Supra GEN3 supercar is not just about a powerful V8 engine; it also gets an aggressive aero kit and racing-chassis upgrades. Along with the new powerplant, the GR Supra receives a custom body kit designed by Toyota Australia, which sets it apart from other racing versions. The car features a more prominent splitter, wider fenders and a large rear wing, along with 18-inch alloy wheels shod in slick tyres and beefier AP Racing brakes, while other chassis modifications remain under wraps for now.

    Toyota has revealed that the supercar will be produced in an ultra-limited number of six units only. Two will be fielded by Walkinshaw Andretti United, while the remaining four go to Brad Jones Racing. These cars will compete against V8-powered versions of the Ford Mustang and Chevrolet Camaro in the Supercars Championship.

    Toyota GR Supra GEN3’s first prototype is set to appear in Sydney on September 1st, ahead of its dynamic debut at the Bathurst 1000 race from October 9-12.


    First Published Date: 10 Aug 2025, 10:25 am IST


  • Feature fusion and selection using handcrafted vs. deep learning methods for multimodal hand biometric recognition


Hand biometrics, including fingerprints and palmprints, are widely used for secure identification due to their high accuracy and non-intrusiveness, and they are among the most commonly deployed modalities across applications. Multibiometrics, and multimodal biometrics in particular, are gaining popularity: they have demonstrated superior performance and increased universality. Recent applications focus on ensuring greater convenience and reduced user cooperation in the development of user-friendly biometric systems1. Hence, multimodality becomes essential to achieve user satisfaction and construct ergonomic security applications without compromising the required accuracy. Multimodal systems further enhance performance by fusing complementary features. Developing robust biometric solutions requires carefully balancing interpretability, efficiency, and accuracy. Traditional methods, using well-defined mathematical models, offer high interpretability and can be computationally efficient; however, they might fail to capture complex variations in biometric data. In contrast, deep learning2 methods excel at learning rich, hierarchical representations directly from data, which leads to significant gains in accuracy and adaptability, but this comes at the cost of reduced transparency and increased computational demand.

Deep learning techniques are increasingly applied in palmprint and fingerprint recognition, with distinct methodologies tailored to specific challenges. The authors of3 propose PRENet, a CNN-based ROI extraction method that ensures dimensionally consistent palmprint regions, paired with SYEnet, a lightweight neural network leveraging four parallel learners for efficient feature extraction. Meanwhile, the authors of4 introduce a self-supervised learning framework comprising two phases: (1) pretraining on unlabeled data and (2) self-tuning the model for refinement.

    For fingerprint recognition, LSTM is employed to model dynamic ridge flow patterns by analyzing spatial feature sequences5. Another approach combines Siamese networks with SIFT descriptors to isolate discriminative features from partial fingerprints6. Additionally, Gannet Bald Optimization (GBO) is utilized to train Deep CNNs, enhancing the classification of fingerprint patterns7. Table 1 summarizes the studied approaches.

    Table 1 Summary of recent literature approaches.

In multimodal systems, the fusion of processed data can occur at various levels depending on the modalities. Decision and score levels are universally applicable to all modalities8,9,10. At the feature level, fusion involves combining extracted features from each modality through data analysis techniques aimed at reducing high dimensionality11,12. Features encapsulate rich and discriminant data extracted from biometric modalities, thereby amplifying the overall effectiveness of the fusion process, and successfully executing fusion at this level holds significant promise for enhancing system performance and accuracy11. The authors of13 study the impact of multimodal feature proportions on matching performance. They find that a fused vector with unequal proportions is more efficient than a 50%-50% split; since they use fingerprint and face, two modalities with dissimilar matching performance, these results are coherent. Moreover, without filtering to retain discriminant features, fusion does not ensure improved matching performance. Therefore, in feature fusion, we must consider the accuracy of each uni-biometric system based on the fused features. Each selected feature has an impact on the success of identifying individuals and discriminating between them. Furthermore, feature selection is necessary to preserve only discriminant features; this is key to improving the identification rate and reducing template storage and processing time costs. This assertion finds support in a study conducted by Santos et al.14, which explores data irregularities and their impact on the overall classification rate.

In this paper, we compare traditional feature extraction (Gabor, Zernike) with deep learning (EfficientNetV2) in a multimodal (fingerprint + palmprint) framework, analyzing fusion strategies and selection robustness. First, we extract features from the two modalities using Gabor filters and Zernike moments. Through intensive testing, we identify the most suitable method for each modality and select it along with appropriate parameters. Subsequently, we employ various selection and classification methods in order to objectively assess the impact of fusion and feature selection on the identification rate. We then compare with a deep learning extractor, EfficientNetV215, the latest optimized and fastest version of the EfficientNet convolutional neural network family. Our main objective is to demonstrate and analyse the efficacy and limitations of both techniques when dealing with fusion and selection methods. This can be a challenging task when working with the unbalanced performances of fused baseline systems; in fact, an effective design of the fusion process plays a critical role in boosting system performance. This is why it is important to study features thoroughly using ranking methods. The applied selection methods achieve good results in preserving classification accuracy; therefore, we further analyse the stability of their rankings under sample variations using multiple metrics. The implementation code and associated results are available on GitHub.

    Selection and fusion of features

    Biometric features have significant influence over recognition or identification rates, potentially contributing positively or negatively. The curse of dimensionality, primarily responsible for limitations in efficiently processing high-dimensional feature spaces16, presents a considerable obstacle. While data richness can be advantageous, redundant features within intraclass and interclass contexts may impede performance. In multibiometric systems, fusion at a single high level (Rank, Decision, Score) does not guarantee accuracy enhancement, as different matchers can yield significantly diverse performances17. Therefore, striking a balance between the quantity and quality of features becomes crucial. Ensuring sufficient high-quality features capable of effectively identifying numerous classes is imperative for optimizing identification rates using classifiers.

    Feature extraction

Feature extraction is the first and one of the most important steps in the biometric process. Texture extraction methods and operators, as described by Amrouni et al.18, are well suited to processing rich signals like fingerprints and palmprints. These modalities are known for their uniqueness and universality, especially the palmprint texture18,19. Gabor filters and Zernike moments are commonly used to extract feature vectors from texture images20. While minutiae- and principal-line-based methods extract characteristic points and features21, these two methods compute subspace and statistical information that remains invariant to rotation and translation22.

    Fingerprint Gabor features

Gabor filters, a class of structure-based methods, are predominantly employed for edge detection and image enhancement23. Additionally, they serve as effective tools for pattern extraction, capable of capturing both local and global information within image texture24. In this paper, we utilize a 2D log-Gabor filter based on the Fourier transform to compute features derived from phase congruency. The log-Gabor filter offers several advantages25. The absence of a DC component ensures that the features remain unaffected by variations in the mean value, and its large bandwidth facilitates robust feature extraction across diverse datasets. The log-Gabor filter exhibits a Gaussian frequency response when plotted on a logarithmic frequency axis, a characteristic that contributes to its effectiveness in capturing intricate texture patterns across different scales.
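As an illustration of this construction, the following sketch applies a radial log-Gabor transfer function in the Fourier domain; the wavelength and bandwidth values are illustrative placeholders, not the parameters used in the paper.

```python
# A minimal sketch of 2D log-Gabor filtering in the frequency domain.
import numpy as np

def log_gabor_filter(rows, cols, wavelength=8.0, sigma_on_f=0.55):
    """Radial log-Gabor transfer function (no DC component)."""
    y, x = np.mgrid[-(rows // 2):rows - rows // 2, -(cols // 2):cols - cols // 2]
    radius = np.sqrt((x / cols) ** 2 + (y / rows) ** 2)
    radius[rows // 2, cols // 2] = 1.0          # avoid log(0) at the DC point
    f0 = 1.0 / wavelength                       # centre frequency (cycles/pixel)
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    lg[rows // 2, cols // 2] = 0.0              # zero DC: features unaffected by the mean
    return lg

def log_gabor_features(image):
    """Filter in the Fourier domain; magnitude and phase form the feature maps."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    response = np.fft.ifft2(np.fft.ifftshift(spectrum * log_gabor_filter(*image.shape)))
    return np.abs(response), np.angle(response)
```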

    Palmprint Zernike moments

Zernike moments are widely used for classifying different modalities such as fingerprint, face, iris, hand vein, finger knuckle, and signature26. Zernike moments are invariant to rotation and can be made invariant to scaling and translation by normalizing their orthogonal polynomials. In addition, they reach near-zero redundancy thanks to their orthogonality27; in fact, independent features are extracted from moments of different orders. However, they are computationally more complex and time-consuming depending on the chosen order, which can be overcome using fast computation algorithms28. The Zernike moments of order n and repetition m are defined through the Zernike polynomials as follows:

$$V_{nm}\left(x,y\right)=R_{nm}\left(r\right)\,e^{im\theta}, \quad r \in [-1,1]$$

    (1)

where $R_{nm}$ is the radial polynomial, defined in terms of factorials.
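A minimal sketch of this computation is shown below, using the direct factorial form of the radial polynomial; the fast algorithms cited above would replace the inner loop in practice, and the grid normalization is one common convention.

```python
# Direct (slow but clear) Zernike moment computation over the unit disk.
import numpy as np
from math import factorial

def radial_poly(n, m, r):
    """R_nm(r) from the factorial definition; assumes n - |m| even, |m| <= n."""
    m = abs(m)
    R = np.zeros_like(r, dtype=float)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        R += c * r ** (n - 2 * s)
    return R

def zernike_moment(image, n, m):
    """Z_nm = (n+1)/pi * sum over the unit disk of f(x, y) * conj(V_nm(x, y))."""
    rows, cols = image.shape
    y, x = np.mgrid[-1:1:rows * 1j, -1:1:cols * 1j]   # normalize grid to the unit square
    r, theta = np.sqrt(x ** 2 + y ** 2), np.arctan2(y, x)
    mask = r <= 1.0                                   # keep only the unit disk
    V = radial_poly(n, m, r) * np.exp(1j * m * theta)
    return (n + 1) / np.pi * np.sum(image[mask] * np.conj(V[mask]))
```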

    Deep learner extractor and classifier EfficientNET

EfficientNet15 is a family of convolutional neural networks with faster training speed and better parameter efficiency. It combines training-aware neural architecture search (NAS) with model scaling, built on blocks such as MBConv, and applies progressive learning that adapts the regularization to the image size. The goal of the new version, EfficientNetV2, is to find a good compromise between improving training speed and parameter efficiency.
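As a hedged sketch, EfficientNetV2 can be used as a fixed feature extractor along the following lines with torchvision; the paper's exact training regime, fine-tuning, and preprocessing may differ.

```python
# Using EfficientNetV2-S as a frozen feature extractor (sketch).
import torch
from torchvision import models

weights = models.EfficientNet_V2_S_Weights.DEFAULT
backbone = models.efficientnet_v2_s(weights=weights)
backbone.classifier = torch.nn.Identity()   # drop the head; keep pooled features
backbone.eval()

preprocess = weights.transforms()           # resize/normalization the weights expect

@torch.no_grad()
def extract_features(image_batch):
    """image_batch: (N, 3, H, W) tensor -> (N, 1280) feature vectors (S variant)."""
    return backbone(preprocess(image_batch))
```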

    Feature selection

Classification models need a comprehensive set of features that are both relevant and non-redundant, enabling effective discrimination between classes in supervised classification scenarios. Conducting feature analysis without class labels can be valuable for evaluating each feature's contribution; on the other hand, prioritizing the acquisition of a subset of features with superior discriminant power is paramount29. In our context, it is important to use both feature evaluation and feature subset search, as we are fusing features of different modalities. These features, extracted using two distinct methods, coupled with the observed variability in classifier accuracies, underscore the significance of evaluating each feature's independence and non-redundancy. Consequently, we assess the impact of feature selection on the identification rate, employing a diverse array of methods from various classes, including variable elimination techniques such as filter and wrapper methods, as well as embedded methods30.

    Next, we describe the different existing methods for feature selection that we use in our proposed scheme. These methods were carefully chosen to encompass the primary feature selection strategies, each employing distinct measures and criteria to evaluate the relevance of features31.

    Filter methods

    The filter-based methods evaluate correlations between features and the class label, and feature-to-feature or mutual information. They assess intrinsic properties of features independently of the classifier employed, offering simplicity and success across numerous applications through the utilization of a relevance criterion and a selection threshold30. Here, we introduce several filter methods utilized in our experiments, organized based on the relevance criterion:

The first group utilizes correlation measures between features, with or without considering class membership. The principal methods selected for our experimentation include:

    Multi-Class Feature Selection (MCFS): An unsupervised feature selection method that assesses correlations between features independently of the class label29. It operates as a correlation-based feature selection method (CFS) for multi-class problems, leveraging Eigenvectors and L1-regularization.

    Correlation-based Feature Selection (CFS): A method based on correlation that evaluates the pairwise correlation of features and identifies relevant features that exhibit low dependence on other features. The ranking is accomplished through a heuristic search strategy32.

The Relief-F Algorithm: one of the six extensions of the Relief algorithm, applied in multi-class problems, it estimates the relevance of features for separating all pairs of classes. It is considered among the most effective of these extensions; it can provide results when stopped early but yields better results with extended time and data33.

The second group builds a graph model of the features to keep the relevant ones.

    Laplacian: is based on Laplacian Eigenmaps and the Locality Preserving Projection34. Utilizing a nearest neighbor graph to model the local geometric structure of the data, it identifies relevant features that preserve the local structure based on their Laplacian Score.

    Infinite Latent Feature Selection (ILFS): it performs feature selection using a probabilistic latent graph that considers all the feature subsets35. Each subset is represented by a path that connects the included features. The relevancy is determined as an abstract latent variable evaluated through conditional probability. This approach enables weighting the graph based on the importance of each feature.

The third group acknowledges the manifold structure of the data, whether the class membership of the features is known or not.

    MUTual INFormation Feature Selection (Mutinffs): is a method predicated on mutual information as a criterion of correlation36. Mutual information serves as an invariant measure of statistical independence, capable of quantifying various relationships between feature-feature or features-class, including non-linear ones, thereby yielding a relevant subset of features.

Unsupervised Discriminative Feature Selection (UDFS): seeks to select the most discriminative features using discriminative analysis and ℓ2,1-norm minimization37. It considers the manifold structure of the data based on local discriminative information, as the class label is not used during training.
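As a concrete instance of the filter family described above, the following sketch ranks fused features by mutual information with the class label using scikit-learn; the number of retained features is an illustrative choice.

```python
# Mutual-information filter ranking (sketch).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_by_mutual_info(X, y, keep=200):
    """Return indices of the `keep` features most informative about the labels."""
    mi = mutual_info_classif(X, y, random_state=0)  # one relevance score per feature
    return np.argsort(mi)[::-1][:keep]              # descending order of relevance
```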

    Wrapper methods

    A wrapper method explores the feature subset by considering the classification performance as an objective function30. It incrementally constructs the feature subset by iteratively adding and removing features to optimize the objective function and achieve the best classification performance.

Feature Selection with Adaptive Structure Learning (FSASL): an unsupervised approach that seeks to identify the most relevant features while preserving the intrinsic structure of the data, simultaneously conducting feature selection and data structure learning38. FSASL leverages a probabilistic neighborhood matrix induced by Euclidean distance for the global manifold and an induced Laplacian for the local manifold.

The Dependence Guided Unsupervised Feature Selection (DGUFS): based on a joint, projection-free learning framework with the ℓ2,1-norm39. The model, which places heightened emphasis on geometric structure and discriminative information as well as the Hilbert-Schmidt Independence Criterion, is solved using an iterative algorithm.

    Local Learning Clustering Feature Selection (LLCFS): integrates feature selection into an unsupervised Local Learning Clustering (LLC) framework, employing regression trained with neighborhood information. The nearest neighbors’ selection performed using τ-weighted square Euclidean distance and a kernel learning are applied to overcome LLC limitations against irrelevant features40.

    Embedded methods

    These methods integrate the feature selection into the training step to build an efficient selection model without increasing the computation time by evaluating different subsets recurrently like wrapper methods30.

    Feature Selection concaVe (FSV): it is a feature selection method based on concave minimization. The concave minimization aims to minimize a bilinear function on a polyhedral set of vectors. The feature selection is integrated into the training step using a linear programming technique41.

Support Vector Machine-Recursive Feature Elimination (SVM-RFE): embeds Recursive Feature Elimination within SVM classification, utilizing weight magnitude as the ranking criterion42. A good feature-ranking method does not necessarily provide a good subset of features; therefore, it is worth evaluating the effect of removing, one at a time, the feature(s) with the smallest ranking value, gradually constructing an optimized feature subset. SVM-RFE employs a selection method that is an instance of greedy backward selection.
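A minimal SVM-RFE sketch with scikit-learn follows; the elimination step size and target subset size are illustrative.

```python
# SVM-RFE: recursive elimination driven by linear-SVM weight magnitudes (sketch).
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

def svm_rfe(X, y, n_keep=100):
    """Rank features by linear-SVM weight magnitude, eliminating one per round."""
    selector = RFE(estimator=SVC(kernel="linear"),
                   n_features_to_select=n_keep,
                   step=1)
    selector.fit(X, y)
    return selector.support_, selector.ranking_
```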

    Feature classification

We selected the following methods, widely referenced in the classification literature30, with the aim of identifying the most suitable classifiers for our data distribution and of testing the stability of the selected features43.

    Regularized linear discriminant analysis (RLDA)

This method is based on Linear Discriminant Analysis and uses regularization to avoid training failures44. Discriminant analysis is straightforward to apply here because of the relationship between the number of features and the number of samples: for palmprint data, the number of features is less than the number of samples, while for fingerprint data it is the opposite. Therefore, we utilize both minimal and maximal regularization to assess the features' independence and validate the competitiveness of this method compared to others.

    K-Nearest neighbor (KNN)

It is a straightforward and effective supervised machine learning method that employs various distances and retains all training data, performing computations at runtime34,45. We examine the following distances: Euclidean, Cosine, Spearman and Correlation. In addition, we vary the number of neighbours in the range [2-10] and apply weighting.

    Multi-Class support vector machine (MC-SVM)

There are different strategies for multi-class SVM, all based on combining multiple binary SVMs trained using one of the following schemes: One Against One (OAO) or One Against All (OAA)46. We opt for the OAO SVM, as it is more suitable for biometric identification, employing a classifier for each pair of classes rather than one per class.
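The following sketch assembles scikit-learn counterparts of the three classifiers above; shrinkage LDA stands in for RLDA, the KNN settings mirror the quoted distance and neighbour ranges, and SVC is one-against-one internally for multi-class data. All parameter values are illustrative.

```python
# The three classifier families compared in the text, in scikit-learn form (sketch).
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

classifiers = {
    "RLDA": LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
    "KNN": KNeighborsClassifier(n_neighbors=5, metric="cosine",
                                weights="distance"),  # k is varied in [2, 10]
    "MC-SVM (OAO)": SVC(kernel="linear", decision_function_shape="ovo"),
}

def evaluate(X_train, y_train, X_test, y_test):
    """Fit each classifier and report its identification accuracy."""
    return {name: clf.fit(X_train, y_train).score(X_test, y_test)
            for name, clf in classifiers.items()}
```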

    Feature fusion

The concept of fusion has garnered significant attention across various research domains, including biometrics47. In biometrics, fusion techniques are motivated by the complementary nature of modalities and, conversely, by the challenge posed by a lack of discriminant features48. Indeed, leveraging different modalities enables the construction of robust and adaptable biometric systems capable of mitigating the impact of feature scarcity. Nonetheless, fusion alone does not guarantee performance enhancement: the conventional approach of combining features through simple concatenation may increase dimensionality, which can impede computational efficiency48. Recent research efforts have therefore focused on proposing quality-based fusion models49,50. Consequently, there arises a need to reduce feature dimensionality and prioritize the most relevant features for classification. Feature selection emerges as a crucial step in biometric fusion, offering performance stability without compromising classification efficiency. In this study, we implement feature selection on fused vectors using the methods outlined in Sect. 2.2, subsequently examining the impact of the resulting feature set on the classification process. Through the application of diverse selection methods, our objective is to evaluate feature ranking based on different criteria and stability metrics, and we consider the impact of the applied selection on the identification and equal error rates.
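Putting the pieces together, serial (concatenation) fusion followed by selection and classification can be sketched as follows; the array names are placeholders for the per-modality feature matrices, and any selector and classifier from the sketches above can be plugged in.

```python
# Fusion-then-selection pipeline (sketch).
import numpy as np

def fuse_and_select(finger_feats, palm_feats, y, selector, classifier):
    """Serial feature-level fusion, then selection, then classifier training."""
    fused = np.hstack([finger_feats, palm_feats])  # concatenate per-sample vectors
    idx = selector(fused, y)                       # e.g. rank_by_mutual_info above
    classifier.fit(fused[:, idx], y)
    return classifier, idx
```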


  • Prediction of speed of sound of deep eutectic solvents using artificial neural network coupled with group contribution approach


In the previous section, the ANN and ML methodologies for predicting the speed of sound of DESs were described and depicted in Figs. 1 and 2. In this section, the ANN + GC and ML + GC approaches are described separately.

    ANN + GC method

The number of independent variables (input layer) plays a crucial role in ANN and ML methods63. In Table 1, thermodynamic properties of DESs, including Tc, Vc, MW, and ω, have been reported. For the ANN method, different input properties and different numbers of neurons in one and two hidden layers have been considered, because a single hidden layer does not always lead to adequate results64,65,66,67. The number of hidden layers, the number of neurons per hidden layer, and the output are obtained using a trial-and-error algorithm; in this work, the Levenberg–Marquardt algorithm68,69 is used to optimize these parameters. The results show that one hidden layer containing 16 neurons and four input properties, namely the critical volume (Vc), molecular weight (Mw), temperature, and acentric factor (ω), are the optimum values. As described in section "Theory and methodology", 300 training and 115 testing data points of the speed of sound have been considered.
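A hedged Python sketch of this architecture is given below. The paper trains the network with Levenberg–Marquardt in MATLAB; since scikit-learn offers no LM solver, L-BFGS is used here as a quasi-Newton stand-in, so this is a sketch of the topology rather than a reproduction of the training procedure.

```python
# Four inputs (T, Vc, Mw, omega), one hidden layer of 16 neurons, one output.
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

model = make_pipeline(
    StandardScaler(),                       # inputs span very different scales
    MLPRegressor(hidden_layer_sizes=(16,),  # one hidden layer, 16 neurons
                 activation="tanh",
                 solver="lbfgs",            # quasi-Newton stand-in for LM
                 max_iter=5000,
                 random_state=0),
)
# X_train: (300, 4) rows of [T, Vc, Mw, omega]; u_train: speed of sound in m/s
# model.fit(X_train, u_train); u_pred = model.predict(X_test)
```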

    As mentioned in Eq. (7), the weight parameters between neurons in hidden layers are essential to develop the network. In Table 4 the weight parameters of neurons have been reported.

    Table 4 The weight of hidden and output layers for 16 neurons.

The optimum number of neurons in the hidden layer is evaluated using the average relative deviation percent (ARD%). Once the optimum network architecture was determined, the input data of the ten DESs were fed to the network to predict their speed of sound. In Fig. 3 the flowchart of the proposed ANN model has been depicted.

    Fig. 3

    Flowchart of proposed model.

As shown in Fig. 3, the model can predict the speed of sound of DESs using the independent variables T, Vc, ω, and MW. The inputs Vc and ω can be estimated using GC approaches45,46. Experimental literature data are used as the training dataset for the speed of sound. An ANN with one hidden layer containing sixteen neurons is employed to develop the model; using the saved network and the four inputs, the speed of sound of DESs can be accurately predicted. The complete MATLAB code, including all source files used in the programming, is provided in the Supplementary Material. The correlated and predicted results of the ANN + GC approach are shown in Fig. 4.

    Fig. 4

    The results for the correlated (a) and predicted (b) speed of sound using the ANN + GC approach.

Figure 4 shows that the ANN + GC approach can correlate the speed of sound of DESs over a wide range of temperatures satisfactorily. The ARD% and R2 of the correlated speed of sound were obtained as 0.032% and 0.9988, respectively. Figure 4b shows the prediction of the speed of sound of ten DESs using the ANN + GC approach; the results are in good agreement with experimental data. Figure 5 shows a simultaneous comparison between the experimental data and the ANN + GC results.

    Fig. 5

    The speed of sound of DESs obtained using the ANN + GC. () Experimental data and (∆) ANN + GC.

    As shown in Fig. 5, the ANN + GC method can correlate the experimental data accurately. Distributions of the deviation points for the ANN + GC method are shown in Fig. 6.

    Fig. 6

    Deviations between calculations from ANN + GC and experimental speed of sound data at different temperatures.

    As shown in Fig. 6, the deviations between the ANN + GC predictions and experimental data do not exceed 4 m/s. Error analysis indicates that the proposed network is suitable for engineering calculations. In this study, the predictive performance of the ANN + GC model was assessed using R2, ARD%, SD, MAE, and RMSE metrics; see Table 5.

    Table 5 Statistical error analysis for the ANN + GC model.

As shown in Table 5, the total ARD%, MAE, SD, RMSE, and R2 values were obtained as 0.032%, 1.5656, 0.0549, 2.227, and 0.9988, respectively. The results of the ANN + GC approach show good agreement with experimental data. Models with R2 values nearing 1 and low values of ARD%, RMSE, MAE, and SD are considered more accurate in predicting the speed of sound. In this study, the ARD% for the training and testing phases of the ANN + GC model were 0.024% and 0.053%, respectively, and the overall R2 value approached unity at 0.9988. These results indicate that the ANN + GC model can accurately correlate the speed of sound in DESs across a wide temperature range. In the next section, the ML + GC model is studied.
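For reference, the metrics quoted here can be computed as below; ARD% is the average relative deviation in percent, and SD is taken as the standard deviation of the residuals, one common convention that may differ from the paper's exact definition.

```python
# Error metrics used throughout this section (sketch).
import numpy as np

def error_metrics(u_exp, u_calc):
    """ARD%, MAE, RMSE, SD of residuals, and R2 for speed-of-sound predictions."""
    u_exp, u_calc = np.asarray(u_exp, float), np.asarray(u_calc, float)
    resid = u_calc - u_exp
    return {
        "ARD%": 100.0 * np.mean(np.abs(resid / u_exp)),
        "MAE": np.mean(np.abs(resid)),
        "RMSE": np.sqrt(np.mean(resid ** 2)),
        "SD": np.std(resid, ddof=1),
        "R2": 1.0 - np.sum(resid ** 2) / np.sum((u_exp - u_exp.mean()) ** 2),
    }
```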

    ML + GC method

    Similar to the ANN + GC method, the inputs for the ML + GC model included Vc, ω, Mw, and T. Additionally, 300 training and 115 testing data points of the speed of sound were used to develop the machine learning approach. The statistical metrics for the CatBoost model are summarized in Table 6, which presents evaluations on the training and testing subsets (300 and 115 data points, respectively), as well as the complete dataset consisting of 415 points. In this study, the predictive performance of the models was assessed using R2, ARD%, SD, MAE, and RMSE metrics. Comparing model predictions with experimental data across both training and testing datasets provides valuable insights into the models’ accuracy and generalization capability; see Table 6.

Table 6 Statistical error analysis for the ML + GC model using the CatBoost approach.
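A hedged sketch of the CatBoost setup summarized in Table 6 follows; the hyperparameters shown are illustrative defaults, not the tuned values used in the study.

```python
# CatBoost regressor for the ML + GC model (sketch).
from catboost import CatBoostRegressor

cat_model = CatBoostRegressor(
    iterations=1000,        # illustrative, not the paper's tuned values
    learning_rate=0.05,
    depth=6,
    loss_function="RMSE",
    random_seed=0,
    verbose=False,
)
# X_train: (300, 4) rows of [T, Vc, Mw, omega]; 115 held-out rows for testing
# cat_model.fit(X_train, u_train); u_pred = cat_model.predict(X_test)
```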

The greater the alignment between the predicted values and the experimental data, the higher the accuracy of the predictive model. Figures 7 and 8 depict the error distribution plot and the cross-plot of the presented model against the predicted speed of sound. These visual representations demonstrate the robust agreement between the experimental data and the forecasts produced by the CatBoost ML method.

    Fig. 7

    Error distribution plot of the ML + GC model to predict speed of sound.

    Fig. 8

    Cross-plot of the ML + GC model to predict speed of sound.

Figures 7 and 8 illustrate a strong correlation between the model-predicted data and the experimental data across both the training and testing datasets, with a very close alignment between the model predictions and experimental points. In this research, graphical analysis complemented statistical methods to provide a more comprehensive evaluation of the models' performance; these visual representations played a vital role in assessing the accuracy and reliability of the models. The percentage distribution of the relative error against the experimental values is presented in Fig. 7. In this type of error evaluation, relative error values are plotted against experimental output values: the closer the data points are to the zero-error line, the more accurate the model, whereas points scattered far from the zero line indicate a significant difference between predicted values and experimental data, i.e., high model error. As a result, the proximity of the data points to the zero line for the ML + GC model indicates the high accuracy of this model. In Fig. 8, the cross-plot has been depicted; it visually represents the comparison between predicted and experimental values. A closer alignment of data points with the unit-slope line (Y = X) signifies higher accuracy and effectiveness of the model. The ML + GC model shows strong performance, with most of the data points lying around the Y = X line. In the next section, a comparison between the ANN + GC, ML + GC, and correlation-based models is presented.

    Comparison between ANN + GC, ML + GC, and the correlation-based models

    The ANN + GC and ML + GC results have been compared to five correlation-based models24,70,71,72,73. Singh and Singh proposed a correlation for speed of sound based on the surface tension and density70. Hekayati and Esmaeilzadeh suggested a novel interrelationship between surface tension (σ), density (ρ), and speed of sound (u) of ILs71. Gardas and Coutinho proposed a relationship between surface tension (σ), density (ρ), and speed of sound (u) for imidazolium based ILs, covering wide ranges of temperature, 278.15–343.15 K73. The aforementioned models are correlation-based. In Table 7 the ARD% of five correlation-based, ML + GC, and ANN + GC models have been reported and compared.

    Table 7 ARD% values of ANN + GC, ML + GC, and five correlation-based models.

The average ARD% value of the Peyrovedin et al.24 model was obtained as 5.67%. The ARD% values of Haghbakhsh et al.'s model72, Hekayati and Esmaeilzadeh's model71, and Gardas and Coutinho's model73 for 38 DESs were obtained as 9.52%, 9.38%, and 9.45%, respectively. The average ARD% value of Singh and Singh's model70, which correlated the speed of sound of ILs using surface tension and density data, was about 39%. As shown in Table 7, the Peyrovedin et al.24 model gives a lower ARD% value than the other correlation-based models. The ANN + GC and ML + GC models give lower error values than the correlation-based models, and the ARD% of the ANN + GC model is slightly lower than that of the ML + GC model. In Fig. 9, the speed of sound of some DESs obtained using the ANN + GC approach is compared to experimental data.

    Fig. 9

    Prediction of speed of sound of DESs using ANN + GC approach. Lines are model prediction and symbols are experimental data. Standard uncertainty of DES speed of sound is 1.0 m/s.

As shown in Fig. 9, the ANN + GC approach correlates the speed of sound of DESs satisfactorily; its average ARD% was obtained as 0.032%. In Fig. 10, the ANN + GC model results are compared to experimental data and to the Peyrovedin et al. model.

    Fig. 10

    Correlation of speed of sound of DESs using ANN + GC approach (lines) and Peyrovedin et al. model (dashed-lines). Symbols are experimental data. Standard uncertainty of DES speed of sound is 1.0 m/s.

As depicted in Fig. 10, the ANN + GC approach correlates the speed of sound of four DESs at various temperatures accurately. In the case of DES1, the ARD value of the Peyrovedin et al. model is higher than that of ANN + GC; nevertheless, its results are acceptable. Figure 10 shows that their proposed correlation is accurate at lower temperatures, with deviations increasing as the temperature rises. As reported in Table 7 and Figs. 9 and 10, the average ARD% values of the testing and training results of the ANN + GC are acceptable. In Fig. 11, the ANN + GC model is compared to the ML + GC and five correlation-based models.

    Fig. 11

    Comparison of the behavior of the speed of sound of DES 4 versus the temperature for the ANN + GC model, ML + GC model, and literature models. (-) ANN + GC, (–) ML + GC, (- – -) H. Peyrovedin et al.24, (-..-) Haghbakhsh et al.’s model72, (…) Hekayati and Esmaeilzadeh’s model 71, (-.-) Gardas and Coutinho’s model73, and (=.=) Singh and Singh’s model70. Symbols are experimental data.

    As shown in Fig. 11, the average ARD% values of the ANN + GC approach are lower than the correlation-based models. The average error values of ANN + GC and ML + GC models are comparable. In Fig. 12, the error distribution plot for ten DESs has been depicted.

    Fig. 12

    Error distribution plot for ten DESs. Corr_1, Corr_2, Corr_3, Corr_4, and Corr_5 refer to H. Peyrovedin et al.24, Haghbakhsh et al.’s model72, Hekayati and Esmaeilzadeh’s model71, Gardas and Coutinho’s model73, and Singh and Singh’s model70.

    Cumulative frequency diagrams are one of the graphical methods used for evaluating model performance74. Figure 13a and b illustrate the cumulative frequency diagrams of the ANN + GC and ML + GC models, along with five correlations (as reported in Table 7).

    Fig. 13

    Cumulative frequency plot for all studied DESs. (a) ANN + GC and ML + GC methods, (b) Corr_1, Corr_2, Corr_3, Corr_4, and Corr_5 refer to H. Peyrovedin et al.24, Haghbakhsh et al.’s model72, Hekayati and Esmaeilzadeh’s model71, Gardas and Coutinho’s model73, and Singh and Singh’s model70, respectively.

As shown in Fig. 13a, approximately 90% of the values estimated by the ANN + GC model exhibited an ARD% of less than 0.07%; for the ML + GC model, 90% of the ARD% values are less than 0.1%. In Fig. 13b, the cumulative frequency of the five correlation-based models has been depicted; the correlation developed by Singh and Singh70 demonstrated poor performance. The results show that the ANN + GC model achieves high precision in forecasting the speed of sound of DESs compared to the five correlation-based models.

    The leverage approach for model analysis

The leverage approach is a valuable tool for ensuring the quality and reliability of statistical models. Identifying and addressing high-leverage points can improve model accuracy, enhance data understanding, and lead to more informed decision-making75. Leverage values help identify observations that have a disproportionate influence on the regression coefficients; points with high leverage and large residuals are particularly problematic, as they can significantly distort the model fit. Leverage diagnostics are used during model validation to assess the stability and generalizability of the model: if the model is highly sensitive to a few high-leverage points, it may not perform well on new data. High-leverage points often indicate data errors or unusual events, and identifying them allows for a targeted investigation of the data to correct errors or to understand the underlying causes of the unusual observations. High-leverage points can sometimes indicate the need to include additional predictor variables in the model, and in some cases, transforming the predictor or response variables can reduce their influence and improve the model fit. Likewise, investigating high-leverage points can provide valuable insights into the data and the underlying processes that generated it. In this study, the leverage approach has been utilized to study the ANN + GC model. In this regard, standardized residuals (SR) and leverage values, derived from the diagonal elements of the hat matrix, have been calculated. The hat matrix is given by:

$$H = X\left(X^{t}X\right)^{-1}X^{t}$$

    (14)

where $X^{t}$ refers to the transpose of matrix X. The critical leverage was calculated as $3(n+1)/m$, where m and n represent the number of data points and model input variables, respectively. The applicability domain of the ANN + GC model can be assessed by plotting standardized residuals against leverage values (the Williams plot), which is the most common and direct way to do this. The applicability domain (AD) of a model is the region where the model is considered reliable for making predictions; in simpler terms, it is the set of conditions under which the model's output can be trusted. Extrapolating beyond the AD can lead to inaccurate or unreliable predictions.

By plotting standardized residuals against leverage values, the Williams plot allows you to identify observations that:

    • Are outliers (large standardized residuals)

    • Have high leverage (unusual predictor values)

    • Are both outliers and have high leverage (potentially very influential and problematic)

If the majority of data points are situated within the boundaries 0 ≤ H ≤ critical leverage and −3 ≤ SR ≤ 3, the established model is deemed reliable and its predictions are confined within the applicability domain75. In Fig. 14, the Williams plot is illustrated.
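These diagnostics can be computed directly from the model inputs and residuals, as in the following sketch, where X is the (m × n) matrix of input variables; the simple residual standardization shown is one common convention, and leverage-adjusted variants also exist.

```python
# Williams-plot diagnostics: leverage, standardized residuals, critical leverage.
import numpy as np

def williams_diagnostics(X, u_exp, u_calc):
    """Leverages, standardized residuals, and the 3(n+1)/m critical leverage."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix, Eq. (14)
    leverage = np.diag(H)
    resid = np.asarray(u_calc, float) - np.asarray(u_exp, float)
    sr = resid / resid.std(ddof=1)              # simple standardization of residuals
    h_star = 3 * (n + 1) / m                    # critical leverage
    inside = (leverage <= h_star) & (np.abs(sr) <= 3)   # applicability domain
    return leverage, sr, h_star, inside         # plot sr vs. leverage for the Williams plot
```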

    Fig. 14

    The Williams plots for outlier detection using the ANN + GC model.

As shown in Fig. 14, the critical leverage value was obtained as about 0.0545, and most of the data points fall within 0 ≤ H ≤ 0.0545 and −3 ≤ SR ≤ 3. The results indicate that the ANN + GC model is highly reliable. There are some suspicious data points (SR > 3 or SR < −3): Fig. 14 shows that only five data points have an SR value outside the range of −3 to 3, classifying them as questionable. On the other hand, all data points have H values lower than 0.0545, indicating satisfactory leverage. The leverage approach confirms the accuracy of the databank and the high reliability of the ANN + GC model in estimating the speed of sound of DESs.

    In the next section, the sensitivity analysis (SA) of input variables in the ANN + GC model has been studied.

    Sensitivity analysis

    Sensitivity analysis in ANNs involves determining how much each input variable influences the network’s output. It helps you understand which inputs are most important and how changes in those inputs affect the model’s predictions. Sensitivity analysis using weight-based methods involves evaluating the influence of input variables on the output by analyzing the weights within the network. These methods are generally more straightforward and computationally less expensive than perturbation-based methods. Garson suggested an equation based on partitioning of connection weights for sensitivity analysis of input variables as follows76:

$$IF_{j} = \frac{\sum_{m=1}^{N_h} \left( \frac{\left| w_{jm}^{ih} \right|}{\sum_{k=1}^{N_i} \left| w_{km}^{ih} \right|} \cdot w_{mn}^{ho} \right)}{\sum_{k=1}^{N_i} \left\{ \sum_{m=1}^{N_h} \left( \frac{\left| w_{km}^{ih} \right|}{\sum_{k=1}^{N_i} \left| w_{km}^{ih} \right|} \cdot w_{mn}^{ho} \right) \right\}}$$

    (15)

where IFj is the relative importance of the jth input variable on the output variable; Ni and Nh refer to the numbers of input and hidden neurons, respectively; the superscripts i, h and o refer to the input, hidden and output layers, respectively; the subscripts k, m and n index input, hidden and output neurons, respectively; and w denotes the connection weights. The relative importance of each input variable (IFj) was calculated by Eq. (15). This approach expands on Garson's method by considering the direct and indirect paths from inputs to outputs, calculating the influence of each input across the network layers on the final output. In Fig. 15 the importance of the input variables, as normalized percentages, has been depicted.
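A compact implementation of this weight-partitioning measure is sketched below for a single-output network; absolute values are taken on the hidden-to-output weights as well, a common variant of Garson's formula.

```python
# Garson's weight-partitioning sensitivity measure, Eq. (15) (sketch).
import numpy as np

def garson_importance(W_ih, w_ho):
    """W_ih: (Ni, Nh) input-to-hidden weights; w_ho: (Nh,) hidden-to-output weights."""
    share = np.abs(W_ih) / np.abs(W_ih).sum(axis=0)  # each input's share per hidden neuron
    contrib = share * np.abs(w_ho)                   # scale by hidden-to-output weight
    raw = contrib.sum(axis=1)                        # total contribution of each input
    return 100.0 * raw / raw.sum()                   # normalized to percent (Fig. 15)
```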

    Fig. 15

    Relative importance (%) of input variables on the value of the speed of sound of DESs.

    It is evident that all selected input variables have a strong influence on the speed of sound values, with importance levels ranging from 21 to 29%. However, it is important to note that highly nonlinear models or coupled input variables can complicate sensitivity analysis. These results highlight which inputs have the most significant impact on the output, aiding in model refinement, feature selection, or providing insight into the underlying process. As shown in Fig. 15, the contributions are typically normalized to sum to 1 (or 100%) to facilitate easier interpretation of the results.

    In summary, ANN methods have several key advantages and disadvantages77. They can model complex nonlinear relationships by selecting an appropriate architecture through trial and error. Once the input layer, the number of neurons, and hidden layers are established, ANNs can predict values beyond those considered during training without reprogramming. However, acquiring large datasets is often challenging and time-consuming. Additionally, the complexity of ANNs can make their implementation difficult. Another drawback is that ANNs require robust central processing units (CPUs) or hardware, which can be resource-intensive. This study demonstrates the strong performance of ANN models in predicting second-order derivative thermodynamic properties, such as the speed of sound in DESs, despite the aforementioned limitations. Traditionally, equations of state (EoS) models have been widely used to estimate the thermo-physical properties of complex systems like ILs and DESs. However, predicting the speed of sound using EoS-based models requires the ideal gas heat capacity of the pure components. Estimating this property using GC models often results in significant deviations in some cases. Consequently, researchers are seeking alternative approaches to predict the speed of sound and specific heat capacity without relying on ideal gas heat capacity estimations. This work shows that the ANN + GC method can be considered a robust and efficient alternative, particularly for predicting second-order derivative thermodynamic properties, such as the speed of sound.


  • The drone challenge – Newspaper


    THE use of drones in warfare is no longer novel. From the battlefields of Ukraine to the brief India-Pakistan stand-off in May, unmanned aerial systems have steadily shaped the way conflicts are fought. Once confined to the arsenals of state militaries, drones have now become part of non-state actors’ toolkits too. This transformation has been gradual but deliberate, as militant groups learn to adapt commercially available platforms or assemble improvised systems for their tactical needs.

    This global evolution has found its echo in Pakistan’s tribal districts, where Islamist militants have begun moving from occasional experimentation to more consistent and coordinated drone operations. The shift is not merely technical. It carries a political undertone, signalling to local populations and state actors alike that militants are capable of matching the technological edge of formal security institutions.

    Pakistan’s security forces were already using drones for surveillance, reconnaissance, and precision targeting. But the appearance of similar technology in militant hands has complicated the battlefield equation. The skies over conflict zones now hold an ambiguity: a drone sighted overhead may belong to either side, and this blurring of lines heightens operational and psychological uncertainty.

    For residents, the growing drone presence is unsettling. Many remain unsure whether the military or militants fly the machines they hear. This uncertainty steadily corrodes trust in the state’s ability to protect them. The fear is not abstract. Farmers delay work in their fields, shopkeepers close their shutters earlier than usual and inter-village travel is restricted. The constant buzzing overhead imposes a psychological toll, making life feel precarious even when no attack occurs. Over time, such conditions widen the social distance between communities and the state.

    The appearance of drone technology in militant hands has complicated the battlefield equation.

    The past few months have seen an intensification of militant violence in KP. According to local monitoring groups, July alone recorded 28 militant attacks in the province, several of them high-profile. Bannu’s Miryan Police Station came under repeated aerial assaults. The deadliest incident was in Orakzai, where eight paramilitary personnel lost their lives. This was particularly significant because Orakzai had long been considered one of the more secure districts compared to the southern belt, a perception now eroding as the militant influence spreads northwards.

    The southern districts remain tense, with curfews imposed in multiple locations. On July 24, Tank district’s Jandola subdivision was placed under a daylong lockdown, while similar restrictions were applied in Birmal tehsil of lower South Waziristan due to credible threats. In Bajaur, a three-day curfew preceded a significant security operation aimed at clearing militant strongholds. Such measures underline not only the fragile security environment but also the strain on state capacity in containing the threat.

    For now, militant drone warfare in Pakistan remains in its formative phase. Accuracy is inconsistent, and much of the activity still bears the hallmarks of experimentation. Yet security officials believe the trajectory is clear: as militants gain confidence and refine their techniques, both the frequency and lethality of these strikes will likely increase, a pattern already seen in other conflict theatres.

For these groups, drones offer versatility. They can deliver improvised explosive devices or small munitions with relative precision, using low-cost quadcopters that are easy to source or assemble. They can map checkpoints, log patrol timings, identify weak points in security perimeters, and spot targets for indirect fire or suicide attacks, relaying live video to command teams in real time. Drones are also used for logistical purposes, moving SIM cards, batteries, medical supplies or technical components between cells in areas where ground movement is too risky. Beyond their functional role, drones serve as tools of psychological warfare, producing aerial footage for propaganda, amplifying the perception of militant reach and eroding the sense of state control.

    The state’s response is beginning to adapt to this emerging threat. Towards the end of July, police in KP deployed anti-drone technology for the first time during an operation in Bannu. An anti-drone gun sent from Peshawar successfully disrupted hostile UAV activity, forcing militants to abort movements. Authorities now intend to expand the deployment of such systems across the southern districts, integrating them into counter-militancy operations. Police officials express optimism that this capability will allow faster, more precise responses and help blunt the evolving tactics of militant groups.

    Responsibility for these attacks is not always straightforward to establish. In Bannu, most incidents have been attributed to the Hafiz Gul Bahadur group. Yet in several cases, the perpetrators remain unidentified. Security forces have generally blamed the TTP, but the group, in some of its official statements, has denied operational involvement, claiming it is still in the process of acquiring drone technology.

    Abdul Sayed, a Sweden-based researcher, offers a more nuanced account. He notes that since 2024, footage of drone strikes has been quietly circulated through TTP’s unofficial media channels, while official communications have avoided direct acknowledgement. This changed in 2025, when the TTP faction Ittehad began openly claiming drone attacks. The move drew criticism from pro-TTP voices on Telegram, who argued that public acknowledgement was strategically unsound. Nevertheless, Ittehad persisted, and eventually, the TTP leadership followed suit, marking a shift in propaganda posture.

    From discreet experimentation to open claims, the evolution of militant drone use indicates that these systems are becoming increasingly entrenched in Pakistan’s internal security landscape. The country’s security challenges were already complex and multifaceted, and the introduction of drone technology adds a new layer of complication. While Islamist militants in KP’s tribal districts are the most frequent users, there is also evidence suggesting that Baloch insurgents have adopted this technology, as indicated by several video clips and reported terrorist attacks.

    If left unchecked, the spread of drone use could deepen mistrust between the state and local populations, further destabilising fragile districts.

    The writer is a security analyst.

    Published in Dawn, August 10th, 2025


  • Former PCB chief curator Tony Hemming gets major role in BCB – International


    This undated picture shows former PCB chief curator Tony Hemming. — PCB

    DHAKA: The Bangladesh Cricket Board (BCB) on Saturday reappointed Tony Hemming, the former chief curator of the Pakistan Cricket Board (PCB), as its Head of Turf Management on a two-year contract.

Hemming, who recently resigned from his role as the PCB's head curator, previously worked with the BCB from July 2023 to July 2024.

    BCB’s cricket operations chairman, Jalal Yunus Rahman, confirmed Hemming’s appointment.

    “Tony Hemming has been appointed as head of turf management for two years,” Rahman said. 

    “All our international venues and curators will be under his supervision, and he will also oversee the training of Bangladeshi curators. There was strong interest from our board directors in bringing him back.”

    Rahman praised Hemming’s credentials, calling him ‘one of the best curators in the world,’ and suggested that his previous positive experience with the BCB influenced his decision to return.

The Pakistan Cricket Board (PCB) confirmed Hemming's resignation on Saturday.

    Hemming assumed the role in July last year on a two-year contract, but has stepped down after 13 months.

    Notably, the Western Australian joined the PCB after his contract with the Bangladesh Cricket Board (BCB) had expired.

    During his stint as PCB chief curator, Hemming prepared pitches for Pakistan’s ICC World Test Championship matches against Bangladesh (two in August/September) and England (three in October) last year.

    Hemming also curated pitches for the ICC Men’s Champions Trophy 2025, held in Pakistan from February 19 to March 9.

For the unversed, Tony Hemming is a highly respected curator with nearly four decades of experience.

He has worked at various iconic cricket grounds in Australia, including in Melbourne, Perth and Tasmania, as well as in countries such as Bangladesh, Qatar, Saudi Arabia and the United Arab Emirates, where he served as the ICC's head curator in Dubai from 2007 to 2017.

    During his time with the ICC, Hemming also oversaw pitch preparation at the Dubai International Cricket Stadium, which was one of Pakistan’s home venues between 2009 and 2019.

It must be noted that Hemming had replaced Zahid, who began his PCB career with the Curators Committee in 2001.

Zahid was later appointed chief curator in 2004. He resigned from the role back in 2020 but was reappointed by former PCB chairman Ramiz Raja in 2021.


  • 7 dumpers set on fire after siblings crushed to death in Karachi – Pakistan


    An angry mob set seven dumper trucks on fire after two siblings were killed in a collision between their motorcycle and a speeding dumper on Rashid Minhas Road in the early hours of Sunday.

    The accident occurred near Lucky One Mall, leaving the siblings’ father critically injured. According to Rescue 1122, 22-year-old Mahnoor and 14-year-old Ali Raza succumbed to their injuries while being shifted to hospital, while their father remains under treatment.

    Following the tragedy, enraged citizens torched multiple dumpers and assaulted the driver involved, who was later handed over to police.

    SSP Central said the driver had been taken into custody, while over 10 people were detained in connection with the unrest.


    Rashid Minhas Road was closed to traffic after the incident. Rescue 1122 said the fire had been extinguished and cooling operations completed.

    Karachi Traffic Police reported that both tracks of the Al-Asif to Hyderabad route remained closed for hours due to the torching incident, with traffic diverted via Total Petrol Pump as an alternative route.

    In retaliation, the Dumper Drivers Association blocked the Super Highway near Sohrab Goth, disrupting traffic flow, and warned of a National Highway blockade.

    Association president Liaquat Mehsud alleged that the accident involved a tanker, not a dumper, and held the Sindh government responsible for the damage to seven vehicles.

    Sindh Governor Kamran Tessori expressed sorrow over the deaths, prayed for the recovery of the injured, and urged strict action against the dumper driver and those endangering public safety.

    He also appealed to citizens not to take the law into their own hands.


  • Scientists just found a tiny molecule that could change how we lose weight


    The obesity rate has more than doubled in the last 30 years, affecting more than one billion people worldwide. This prevalent condition is also linked to other metabolic disorders, including type 2 diabetes, cardiovascular diseases, chronic kidney disease, and cancers. Current treatment options include lifestyle interventions, bariatric surgery, and GLP-1 drugs like Ozempic or Wegovy, but many patients struggle to access or complete these treatments or to maintain their weight loss afterwards.

    Salk Institute scientists are looking for a new treatment strategy in microproteins, an understudied class of molecules found throughout the body that play roles in both health and disease. In a new study, the researchers screened thousands of fat cell genes using CRISPR gene editing to find dozens of genes that likely code for microproteins — one of which they confirmed — that regulate either fat cell proliferation or lipid accumulation.

    The findings, published in Proceedings of the National Academy of Sciences on August 7, 2025, identify new microproteins that could potentially serve as drug targets to treat obesity and other metabolic disorders. The study also showcases the value of CRISPR screening in future microprotein discovery.

    “CRISPR screening is extremely effective at finding important factors in obesity and metabolism that could become therapeutic targets,” says senior author Alan Saghatelian, a professor and holder of the Dr. Frederik Paulsen Chair at Salk. “These new screening technologies are allowing us to reveal a whole new level of biological regulation driven by microproteins. The more we screen, the more disease-associated microproteins we find, and the more potential targets we have for future drug development.”

    Current obesity and metabolic disorder therapeutics

    When our energy consumption exceeds our energy expenditure, fat cells can grow in both size and number. Fat cells store the excess energy in the form of fatty molecules called lipids. But while some excess storage is manageable, too much can cause fat deposits to accumulate around the body — leading to whole-body inflammation and organ dysfunction.

    Many factors regulate this complex energy storage system. The problem is, how do we find them all, and how do we filter for factors that may make good therapeutic candidates?

    This has been a longstanding question for Salk scientists. In fact, Salk Professor Ronald Evans has been working on it for decades. Evans is an expert on PPAR gamma, a key regulator of fat cell development and a potent target for treating diabetes. Several drugs have been developed that target PPAR gamma to treat diabetes, but they resulted in side effects like weight gain and bone loss. An ideal PPAR gamma-based obesity therapeutic has yet to hit the market.

    When PPAR gamma drugs fell short, GLP-1 drugs entered the scene. GLP-1 is a peptide small enough to be considered a microprotein, and it serves as a blood sugar and appetite regulator. But, like PPAR gamma, GLP-1 drugs have their own shortcomings, such as muscle loss and nausea. Nonetheless, the popularity of GLP-1 drugs demonstrates a promising future for microprotein drugs in the obesity therapeutic space.

    Saghatelian’s team is now searching for the next microprotein therapeutic with new genetic tools that bring microproteins out of the “dark.” For many years, long stretches of the genome have been considered “junk” and thus left unexplored. But recent technological advances have allowed scientists to look at these dark sections and find a hidden world of microproteins — in turn, expanding protein libraries by 10 to 30 percent.

    In particular, the Salk team is using innovative CRISPR screening to scour the “dark” for possible microproteins. This approach is enabling the simultaneous discovery of thousands of potential microproteins involved in lipid storage and fat cell biology, accelerating the search for the next PPAR gamma or GLP-1 drug.

    How CRISPR screening accelerates the search for microproteins

    CRISPR screens work by cutting out genes of interest in cells and observing whether the cell thrives or dies without them. From these results, scientists can determine the importance and function of specific genes. In this case, the Salk team was interested in genes that may code for microproteins involved in fat cell differentiation or proliferation.
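
    To make that readout concrete, here is a minimal, hypothetical Python sketch of how pooled knockout screens are commonly scored. The guide names, read counts, and scoring rule below are illustrative assumptions, not the Salk team's actual pipeline: guides targeting a gene the cells need for proliferation drop out of the population over time, which shows up as a negative log fold change.

    ```python
    # Hypothetical scoring of a pooled CRISPR knockout screen (illustrative only).
    # Guides that target a gene required for proliferation become depleted,
    # so their read counts fall between the start and the end of the screen.
    import math
    from collections import defaultdict

    # Toy read counts: guide name -> (reads at day 0, reads at end of screen)
    guide_counts = {
        "geneA_sg1": (500, 40),   # strongly depleted: geneA likely matters
        "geneA_sg2": (450, 60),
        "geneB_sg1": (480, 470),  # roughly neutral
        "geneB_sg2": (510, 530),
    }

    def log2_fold_change(initial, final, pseudocount=1):
        """Negative values indicate depletion of cells carrying this guide."""
        return math.log2((final + pseudocount) / (initial + pseudocount))

    # Aggregate guide-level scores into a per-gene score.
    gene_scores = defaultdict(list)
    for guide, (day0, end) in guide_counts.items():
        gene = guide.rsplit("_", 1)[0]
        gene_scores[gene].append(log2_fold_change(day0, end))

    for gene, scores in gene_scores.items():
        mean_lfc = sum(scores) / len(scores)
        print(f"{gene}: mean log2 fold change = {mean_lfc:+.2f}")
    ```

    Real analyses layer statistical testing on top of this basic fold-change idea, typically with dedicated screen-analysis tools such as MAGeCK.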

    “We wanted to know if there was anything we had been missing in all these years of research into the body’s metabolic processes,” says first author Victor Pai, a postdoctoral researcher in Saghatelian’s lab. “And CRISPR allows us to pick out interesting and functional genes that specifically impact lipid accumulation and fat cell development.”

    This latest research follows up on a prior study from Saghatelian’s lab. The previous study identified thousands of potential microproteins by analyzing microprotein-coding RNA strands derived from mouse fat tissues. These microprotein-coding RNA strands were filed away to await investigation into their functions.

    The new study first expanded this collection to include additional microproteins identified from a pre-fat cell model. Notably, this new model captures the differentiation process from pre-fat cell to a fully mature fat cell. Next, the researchers screened the cell model with CRISPR to determine how many of these potential microproteins were involved in fat cell differentiation or proliferation.

    “We’re not the first to screen for microproteins with CRISPR,” adds Pai, “but we’re the first to look for microproteins involved in fat cell proliferation. This is a huge step for metabolism and obesity research.”

    Microproteins of interest and next steps

    Using their mouse model and CRISPR screening approach, the team identified microproteins that may be involved in fat cell biology. They then narrowed the pool even further with another experiment to create a shortlist of 38 potential microproteins involved in lipid droplet formation — which indicates increasing fat storage — during fat cell differentiation.

    At this point, the shortlisted microproteins were all still “potential” microproteins. This is because the genetic screening finds genes that may code for microproteins, rather than finding the microproteins themselves. While this approach is a helpful workaround to finding microproteins that are otherwise so small they elude capture, it also means that the screened microproteins require further testing to confirm whether they are functional.

    And that’s what the Salk team did next. They picked several of the shortlisted microproteins to test and were able to verify one. Pai hypothesizes this new microprotein, called Adipocyte-smORF-1183, influences lipid droplet formation in fat cells (also known as adipocytes).

    Verification of Adipocyte-smORF-1183 is an exciting step toward identifying more microproteins involved in lipid accumulation and fat cell regulation in obesity. It also verifies that CRISPR is an effective tool for finding microproteins involved in fat cell biology, obesity, and metabolism.

    “That’s the goal of research, right?” says Saghatelian. “You keep going. It’s a constant process of improvement as we establish better technology and better workflows to enhance discovery and, eventually, therapeutic outcomes down the line.”

    Next, the researchers will repeat the study with human fat cells. They also hope their success inspires others to use CRISPR screenings to continue bringing microproteins out from the dark — like Adipocyte-smORF-1183, which until now, was considered an unimportant bit of “junk” DNA.

    Further validation or screening of new cell libraries will expand the list of potential drug candidates, setting the stage for the new-and-improved obesity and metabolic disorder therapeutics of the future.

    Other authors include Hazel Shan, Cynthia Donaldson, Joan Vaughan, Eduardo V. De Souza, Carolyn O’Connor, and Michelle Liem of Salk; and Antonio Pinto and Jolene Diedrich of Scripps Research Institute.

    The work was supported by the National Institutes of Health (F32 DK132927, RC2 DK129961, R01 DK106210, R01 GM102491, RF1 AG086547, NCI Cancer Center P30 014195, S10-OD023689, and S10-OD034268), the Ferring Foundation, the Clayton Foundation, and the Larry and Carol Greenfield Technology Fund.

    Continue Reading

  • ABHI and Pay People collaborate to transform employees’ financial wellness

    ABHI and Pay People collaborate to transform employees’ financial wellness

    KARACHI  –  In a major move to enhance employees’ financial well-being, ABHI, an embedded finance platform, has partnered with PayPeople to offer Earned Wage Access (EWA), allowing employees to draw on their earned wages instantly. The collaboration pairs PayPeople’s reach with ABHI’s expertise in financial wellness.

    Through this partnership, employees will gain real-time access to their salaries, making it easier to manage financial responsibilities and reducing financial stress. By giving individuals this flexibility, businesses can foster a more engaged, productive, and satisfied workforce. The initiative underscores ABHI’s commitment to accessible, impactful financial solutions that help employers prioritize their workforce’s financial well-being and build a more resilient, future-ready workplace.
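
    For readers curious what instant access to earned wages looks like mechanically, below is a minimal, hypothetical sketch of EWA accrual logic. The salary figures, the access cap, and every field name are assumptions made for illustration; this is not ABHI’s or PayPeople’s actual implementation.

    ```python
    # Hypothetical Earned Wage Access (EWA) accrual, purely illustrative.
    # Figures, the access cap, and field names are assumptions, not ABHI's logic.
    from dataclasses import dataclass

    @dataclass
    class Employee:
        monthly_salary: float
        days_worked: int      # working days completed so far this pay cycle
        days_in_cycle: int    # total working days in the pay cycle
        already_withdrawn: float = 0.0

    def available_earned_wages(emp: Employee, access_cap: float = 0.5) -> float:
        """Wages accrued to date, minus prior withdrawals, capped at a salary fraction."""
        accrued = emp.monthly_salary * emp.days_worked / emp.days_in_cycle
        cap = emp.monthly_salary * access_cap
        return max(0.0, min(accrued, cap) - emp.already_withdrawn)

    emp = Employee(monthly_salary=90_000, days_worked=10, days_in_cycle=22)
    print(available_earned_wages(emp))  # about 40,909 accrued, under the 45,000 cap
    ```

    The cap reflects a common design choice in EWA products: limiting on-demand withdrawals to a fraction of accrued salary so that end-of-cycle payroll deductions remain manageable.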


    Continue Reading

  • MCML launches upgraded variants of its two most popular vehicles

    MCML launches upgraded variants of its two most popular vehicles

    LAHORE  –  In a notable development for Pakistan’s commercial and family mobility sector, Master Changan Motors Limited (MCML) has launched upgraded variants of its two most popular vehicles, the Karvaan and the Sherpa, signaling a fresh commitment to safety, performance, and practicality.

    At a grand launch event held in Lahore, the company marked a milestone for the Changan Karvaan, which has now crossed 25,000 units sold in Pakistan. First introduced five years ago, the Karvaan has become a leader in its category, commanding over 52% market share in the 7-seater MPV segment. Once considered an under-served category in the automotive landscape, the multipurpose van is now evolving in design, technology, and power.

    The newly unveiled Karvaan Power Plus UG brings a significant shift under the hood: a more powerful 1.2L engine replacing the previous 1.0L unit, offering 44% more horsepower and improved fuel efficiency. Alongside performance, the company has also focused on safety, introducing dual airbags for the driver and front passenger, ABS with EBD for more controlled braking, and rear parking sensors, all firsts in this category.

    Other enhancements include dual-zone air conditioning with vents for both front and rear rows, a modern infotainment system compatible with Android Auto and Apple CarPlay, reverse camera and spacious in-cabin experience. Cosmetic upgrades such as alloy wheels, smoked headlamps and dual tone interior with wooden trim accents aim to bridge the gap between utility and urban aesthetics.

    In keeping with its vision of introducing value-driven products, the company announced an introductory price of Rs 3,299,000 for the Karvaan.

    MCML also launched the Sherpa Power 1.2, an upgraded version of its rugged mini truck long favored by small business owners and logistics operators. With the same new 1.2L engine, the Sherpa is now better equipped for heavy-duty work, offering more torque and better long-haul performance across mixed terrains. Known for its durability and load capacity, the new Sherpa continues to target both urban and rural sectors that depend on durable, low-maintenance transport solutions.

    Speaking at the launch, Danial Malik, CEO of Master Changan Motors, remarked, “Karvaan is not just a vehicle, it’s a movement. It has transformed what people expect from a 7-seater van by bringing dignity, design, and dependability to everyday mobility. Today, we are proud to take this journey further with the upgraded Karvaan UG and the next-generation Sherpa Power 1.2.”

    By upgrading these two models, MCML appears to be doubling down on a segment that is often overlooked but critically important.


    Continue Reading

  • How to craft Copper Lantern in Minecraft | Esports News

    How to craft Copper Lantern in Minecraft | Esports News

    Copper Lantern is one of the new decorative light sources expected to arrive in Minecraft’s 2025 Fall Drop update. Just like the regular lantern, it serves as a reliable light-emitting block for illuminating an area, but it has a charm of its own and offers a distinct look. The Copper Lantern features a beautiful copper frame and casts a soft green-tinted light, giving the item a vintage feel. This guide explains everything you need to know about crafting the new lantern in Minecraft.

    Steps to craft the Copper Lantern in Minecraft

    Here’s how you can craft the Copper Lantern (Image via Mojang)

    The process of making the Copper Lantern is similar to that of the standard lantern. You will need just two components, one of which is the new Copper Nugget. Here’s the full crafting recipe:

    • 8x Copper Nuggets
    • 1x Torch

    The only difference from a regular lantern is that you use Copper Nuggets instead of Iron Nuggets.

    1) Collect the necessary items

    To begin crafting, gather the required materials for the Copper Lantern. Here are further specifics:

    • Torch – Craftable using 1x Stick and 1x Coal on a Crafting Table.
    • Copper Nuggets – Obtained by placing a Copper Ingot in the Crafting Table, which splits it into 9 Nuggets; since a lantern needs 8, a single ingot covers one lantern with one nugget to spare (see the quick batch-arithmetic sketch after this list). Copper Ingots can be acquired by smelting Raw Copper, which you can mine from Copper Ore.
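
    If you are crafting in bulk, the nugget math works out as below. This is an illustrative sketch of the arithmetic, not game code:

    ```python
    # Quick batch arithmetic: Copper Ingots needed for a given number of lanterns.
    # One ingot splits into 9 nuggets; each lantern consumes 8 nuggets plus 1 torch.
    NUGGETS_PER_INGOT = 9
    NUGGETS_PER_LANTERN = 8

    def ingots_needed(lanterns):
        """Ceiling division, since leftover nuggets carry over between lanterns."""
        total_nuggets = lanterns * NUGGETS_PER_LANTERN
        return -(-total_nuggets // NUGGETS_PER_INGOT)

    print(ingots_needed(1))  # 1 ingot, with 1 nugget to spare
    print(ingots_needed(9))  # 8 ingots cover exactly 72 nuggets
    ```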

    2) Place the items in the Crafting Table

    Once you have both items, follow this layout in the Crafting Table (a grid sketch follows these steps):

    • Place the Torch in the center slot.
    • Surround it with 8 Copper Nuggets (one in each slot).

    This will yield 1x Copper Lantern in Minecraft.
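
    If it helps to visualize the layout, here is a small illustrative sketch that models the 3x3 crafting grid in Python; the item labels are informal names, not Minecraft’s internal identifiers:

    ```python
    # Illustrative 3x3 crafting grid for the Copper Lantern recipe.
    # The item labels are informal names, not Minecraft's internal identifiers.
    N, T = "copper_nugget", "torch"

    copper_lantern_grid = [
        [N, N, N],
        [N, T, N],  # torch goes in the center slot
        [N, N, N],
    ]

    def matches_lantern_recipe(grid):
        """True when the torch is centered and all eight surrounding slots hold nuggets."""
        if grid[1][1] != T:
            return False
        ring = [grid[r][c] for r in range(3) for c in range(3) if (r, c) != (1, 1)]
        return all(slot == N for slot in ring)

    print(matches_lantern_recipe(copper_lantern_grid))  # True, yielding 1x Copper Lantern
    ```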

    Uses of Copper Lanterns in Minecraft

    Just like the regular lantern, the Copper Lantern emits a light level of 15, making it a solid light source to have in Minecraft. The following are some of the key uses of the new item in the game:

    • Decoration – Perfect for decorative touches across a wide range of builds.
    • Lighting – Keeps the surrounding area bright enough to prevent hostile mobs from spawning (see the falloff sketch after this list).
    • Oxidization Aesthetics – Over time, the copper frame develops a weathered, greenish texture. The oxidation can be reversed, but it gives the lantern a unique, aged look.
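
    For the lighting point above, here is a tiny illustrative sketch of the usual rule of thumb that block light drops by one level per block traveled; the game computes propagation per block, so treat this as an approximation:

    ```python
    # Rule-of-thumb block-light falloff: light drops one level per block traveled.
    # In-game propagation is computed per block, so treat this as an approximation.
    EMITTED_LIGHT = 15  # the Copper Lantern's light level, same as a regular lantern

    def light_at(distance_in_blocks):
        """Light level a given number of blocks away from the lantern."""
        return max(0, EMITTED_LIGHT - distance_in_blocks)

    # Light reaches zero 15 blocks out, beyond which hostile mobs may spawn again.
    print([light_at(d) for d in range(16)])  # [15, 14, ..., 1, 0]
    ```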

    You can place the Copper Lantern on top of blocks or hang it from the underside of a block. With this guide, you now know the full crafting process for Copper Lanterns and how to put them to use in Minecraft.


    Continue Reading