  • Gas found in space could help repair damage to Old Masters, say researchers – The Art Newspaper

    In the early 2000s, conservators at Statens Museum for Kunst (SMK) in Copenhagen discovered that painted white highlights on Old Master drawings—works by Hans Holbein, Abraham Bloemaert, C.W. Eckersberg and others—were darkening at an alarming rate.

    Discoloration of the lead white pigment on drawings, prints and photographs has long vexed conservators. But SMK, which holds one of the world’s largest drawings collections numbering around 18,000 works and dating back to the end of the 15th century, saw an opportunity to understand why it was happening and find ways to address it.

    “When the drawings are in that condition, they cannot be exposed to the public anymore because they lose, in a way, their meaning and their appearance and so they are not suitable for display,” says Gianluca Pastorelli, a conservation scientist at SMK who is part of a multi-decade effort co-led by Niels Borring to research the degradation and find ways to protect the collection.

    The challenge presented a mystery as not every work, even by a single artist in a particular time frame, exhibits discolouring. This darkening is also rarely seen in oil paintings. Of the SMK’s 800 works drawn in chalk, charcoal or pencil and painted with lead white highlights, about half experienced some degree of discoloration. An additional 200 salted prints and lithographs from Copenhagen’s Royal Library have also been affected. 

    Lead white was the pigment of choice for its white color and unique qualities from antiquity onwards, until toxic health effects drove it out of fashion in the 20th century. How it has been sourced and produced has changed over time, but it was historically mixed with binders and brushed onto the works.

    The researchers wondered whether certain artistic methods or materials were making the affected works more vulnerable. Pastorelli’s team used advanced, nondestructive imaging technologies, X-ray fluorescence and X-ray powder diffraction, and microsampling lead isotope analysis to essentially fingerprint the paint compounds, identify material properties and study the chemical makeup of the darkened areas.

    Their research showed that changing manufacturing techniques and the chemical composition of the pigment affected vulnerability to degradation. They also found that chemical reactions were converting lead white to lead sulphide, or galena, a well known metallic-grey mineral. Micro cross sections of damaged areas revealed that it was happening most often at the surface. They deduced that the culprit was airborne sulphur-containing compounds.

    Sulphur is a pollution byproduct—it comes from traffic, industry and human digestive gases—and sulphur levels have risen precipitously since the industrial revolution. At SMK, a move to temporary storage conditions was partially to blame for higher sulphur exposure levels and noticeable darkening. But why was the early 2000s such a turning point?

    “We have to also remember that in 2003 to 2004, we started to have very hot summers and this has an impact on the amount of pollution that is produced from traffic or factories, and also it makes it more challenging for climate control systems inside museums and their filtering system to handle all the variables that must be limited to a certain range,” Pastorelli says. On a simple level, heat and available reactants, such as increased pollution, make chemical reactions happen faster and more readily.

    “We believe lead white darkening is much less frequent on paintings largely because many paintings use oil paints, and oil indeed has a strong protective effect on lead white pigments,” Pastorelli says.

    The quest for a space-age solution

    Conventional treatments include using hydrogen peroxide baths or gels to lighten the darkened highlights, but these carry a risk of damaging the materials in the work and chemically create a new compound, which is not ideal in conservation. “It is like an old-school cancer drug—it treats, but it also destroys,” says Tomas Markevicius, an art conservator and founder of the Moxy Project research initiative.

    Markevicius and his team, which includes Pastorelli, are studying an entirely new approach to this conservation challenge—the use of atomic oxygen. In May, at a conference in Perugia, he presented the first ever use of atomic oxygen to reverse lead white darkening—without water, acids or contact—on lab mockups. He calls it “a major breakthrough in both chemistry and art conservation”.

    If it sounds space age, that is because it is. Atomic oxygen is a highly reactive form of oxygen found in low Earth orbit that readily interacts with chemicals around it. Nasa scientists Sharon Miller and Bruce Banks researched how this disruptive gas would impact spacecraft exteriors and later studied its use to clean items of cultural heritage.

    Upon contact with atomic oxygen, organic materials like soot, stains and varnishes release into the air. “They get converted to carbon monoxide and carbon dioxide and leave as a gas, so they don’t leave residue on the surface,” Miller says.

    She and Banks were able to test it out on an unlikely contaminant. At a 1997 event organised for the fashion brand Chanel at the Andy Warhol Museum in Pittsburgh, a reckless guest defaced Warhol’s hand-painted Bathtub (1961) with a lipstick-laden kiss. Conventional treatments failed to remove the makeup. Miller and Banks employed an atomic oxygen beam for more than five hours, which removed the smudge but also some of the underlying grime and a thin paint layer.

    Markevicius’s Moxy group is developing atomic oxygen technology as part of a green cluster in cultural heritage backed by the European Union to develop sustainable technologies. They aim to produce a lab-scale prototype by late 2026 and are rigorously testing mock-ups and physically sensitive materials with varying contaminants to optimise protocols.

    For now, SMK is ensuring that its climate control and air-filtering systems work efficiently, but it is not taking action to lighten the darkened highlights in affected works, because there is currently no safe technique. “It’s a strong driver to develop such technologies that would have the least impact upon these incredibly valuable but also incredibly fragile structures, which are unique,” Markevicius says.


  • Association of Poor Sleep Efficiency With Decreased Executive Function and Impaired Episodic Memory in Older Adults


  • Former snooker champion Graeme Dott to face child sex abuse trial

    Former world snooker champion Graeme Dott is to stand trial charged with child sex abuse.

    The 48-year-old Scot is accused of lewd and libidinous behaviour towards two children between 1993 and 2010.

    The allegations include claims he inappropriately touched a girl, instructed her to remove her clothes and exposed himself to her, as well as molesting a boy, making sexual remarks and watching him shower.

    Dott, who won the world title in 2006, has pled not guilty to the charges.

    A trial has been scheduled for next year.

    Both charges state the alleged incidents occurred on “various occasions”, at addresses in the east end of Glasgow, South Lanarkshire and in a car.

    The case called for a hearing at the High Court in Glasgow, where the attendance of Dott was excused.

    His lawyer Euan Dow told the hearing there was one defence witness listed, but there could potentially be more.

    Mr Dow told the court that his client was not currently ready for trial, but asked for a date to be set.

    Lord Mulholland confirmed a five-day trial would begin on 17 August 2026.

    Dott remains on bail.

    He was suspended by the World Professional Billiards and Snooker Association when the charges were announced earlier this year.


  • Save a whopping £1700 on Sony’s multiple Award-winning native 4K projector

    One of the projectors we recommend to serious movie fans looking to set up a proper home cinema has just plummeted in price.

    Specifically, the outstanding Sony VPL-XW5000ES 4K projector is down to £4299 at Richer Sounds. That’s a saving of £1700.

    This Sony projector has won the prestigious Product of the Year Award in the projectors category for three consecutive years, so you know you’re getting the highest level of quality.

    To get this record-low price, you’ll need to be a Richer Sounds VIP Club member. Thankfully, it’s absolutely free, and signing up is quick and easy.

    One key selling point for the VPL-XW5000ES over cheaper projectors is its native 4K resolution and laser lighting.

    Typically, all but the very premium 4K projectors apply ‘pixel shifting’ or ‘double flashing’ technology to native full HD chipsets to create a 4K resolution (or a 4K effect).

    But like Sony’s higher-priced models, the 5000ES actually carries a real 4K 3840×2160 pixel count on its new 0.61-inch SXRD imaging chips. Based on our testing, this lets the unit deliver a spectacular picture.

    So much so that our in-house testing experts said in our VPL-XW5000ES review that it “redefines projector expectations at its price”.

    Highlights include phenomenal sharpness; excellent black levels that are deep, rich, and neutrally toned; and beautifully balanced, exceptionally nuanced, and bold but controlled colours across the board. Its motion handling and upscaling are also superb.

    There are a couple of downsides, however. There’s no support for either the HDR10+ or Dolby Vision advanced HDR formats; such support isn’t common in the projector world, to be fair, but it is desirable.

    For gamers, two HDMI connections do support 120Hz but not at 4K. You’ll be limited to 1080p if you want higher than 60Hz refresh rates.

    Otherwise, the discounted £4299 price at Richer Sounds (with the VIP Club discount) makes it a fantastic option for any movie fan looking for a stellar-value “proper” projector.

    MORE:

    Read our full Sony VPL-XW5000ES review

    The best projectors you can buy, budget to premium

    And check out our Sony Bravia Projector 8 review


  • India says international court lacks authority to rule on Pakistan water treaty – Reuters

    1. India says international court lacks authority to rule on Pakistan water treaty  Reuters
    2. After Asim Munir, Bilawal Bhutto’s War Threats, Pakistan’s “Water” Request To India  NDTV
    3. ‘You cannot snatch even a drop from Pakistan,’ PM Shehbaz warns India on restricting water flow  Dawn
    4. Attempt to Block Pakistan’s Water Will Face Decisive Response: PM  ptv.com.pk
    5. India’s hold on Pakistan begins to hurt where it matters  The Economic Times


  • Australian scientists target frost losses threatening wheat industry-Xinhua

    CANBERRA, Aug. 14 (Xinhua) — Australian scientists are leading efforts to protect grain growers from frost damage, which costs the wheat industry over 360 million Australian dollars (about 235 million U.S. dollars) a year.

    Field trials are being combined with laboratory and controlled-environment studies to develop genetic solutions to frost, according to a statement released Wednesday by Australia’s Charles Sturt University (CSU).

    Crop scientists are studying novel wheat germplasm in fields and frost-simulation chambers to uncover genetic links to frost damage, which has risen by 30 percent in southern Australia since 2000, with climate models predicting both frequency and severity will intensify, it said.

    “This presents a challenge to breeders to improve crop tolerance to stress and for industry to integrate new genetic potential into farming systems to continually adapt to climate change, thus increasing productivity,” said CSU Senior Lecturer in Crop Science Felicity Harris.

    Plants time their growth and flowering to seasonal patterns in temperature, light and moisture, but frost can disrupt this, damaging tissues from cell to canopy and hindering development, researchers said.

    The project, now in its second year of field validation, aims to equip plant breeders with knowledge to develop frost-tolerant varieties and tools to help growers reduce future frost damage, Harris said.

    “In the long run, this will contribute to reducing risk associated with frost and improved crop productivity for Australian farmers,” she said.

    The CSU heads the New South Wales arm of three national research projects funded by the Grains Research and Development Corporation and led by the Commonwealth Scientific and Industrial Research Organization, Australia’s national science agency, with partners nationwide.


  • Abu Dhabi Airports Partners with IndiGo to Drive Double-Digit Passenger Traffic Growth, Marking 17th Consecutive Quarter of Expanding Global Connectivity

    Published on August 14, 2025

    Abu Dhabi Airports has formed a strategic partnership with IndiGo, aiming to fuel double-digit growth in passenger traffic. This collaboration marks the 17th consecutive quarter of expansion, showcasing a steadfast commitment to enhancing global connectivity. The partnership underlines Abu Dhabi Airports’ pivotal role in driving economic growth and reinforcing its standing as a key aviation hub, while also fostering stronger travel ties between the UAE and India.

    Abu Dhabi Airports is reinforcing its role as a major catalyst for economic growth and global connectivity, with double-digit passenger traffic growth for the 17th consecutive quarter, alongside strong increases in flight movements and cargo volume during the first half of 2025. From January 1 to June 30, 2025, the airports in Abu Dhabi hosted over 15.8 million passengers, marking a notable 13.1% year-over-year growth compared to the first half of 2024. Zayed International Airport (AUH) was instrumental in driving this surge, handling 15.5 million passengers, a 13.2% increase from the previous year. The rise in passenger numbers was further supported by 133,533 total flight movements across Abu Dhabi’s five airports, reflecting a 9.2% rise from the same period in 2024. Specifically, AUH recorded 93,858 aircraft movements, showing an 11.4% increase over the previous year’s figure of 84,286 flights.

    As part of its continued expansion, Abu Dhabi Airports remains committed to broadening its global reach. In the first half of 2025, the network introduced 16 new destinations and welcomed additional airline partners. Key developments include China Eastern Airlines’ four-times-weekly service to Shanghai, which will increase to daily flights in September, Air Seychelles’ six weekly flights, and Fly Cham’s newly launched service to Damascus. Additionally, IndiGo expanded its operations at AUH with new routes to Madurai, Bhubaneswar, and Vishakhapatnam, further solidifying AUH as IndiGo’s most connected hub in the UAE.

    Elena Sorlini, Managing Director and Chief Executive Officer at Abu Dhabi Airports, said: “The first six months of this year have posed some operational challenges, yet our exceptional mid-year results demonstrate the resilience of our network and the collaborative partnerships that underpin our growth. Consistently delivering positive growth for the past 17 quarters is testament to the dedication and collective effort of the entire Abu Dhabi Airports team. It reflects our operational agility and commitment to delivering an exceptional aviation experience and attracting international investors. As Abu Dhabi’s tourism and trade prospects rapidly advance, our airports are well positioned to support and scale that growth.”

    Abu Dhabi Airports also demonstrated significant growth in cargo operations, highlighting the emirate’s growing significance in global trade. In the first six months of 2025, it handled a total of 344,795 tonnes of cargo year-to-date, bolstered by strategic partnerships and ongoing infrastructure enhancements. These include a major joint venture agreement signed with JD Property, the infrastructure arm of China’s e-commerce giant JD.com. The agreement will see the development of a state-of-the-art 70,000sqm advanced-tech facility that aims to meet the growing east-west demand for e-commerce and specialised cargo logistics throughout the GCC and broader MENA region.

    The first half of 2025 was also marked by several strategic milestones across Abu Dhabi Airports’ network. The completion of rehabilitation works at Sir Bani Yas Airport reflects the commitment to strengthening the UAE’s aviation infrastructure and supporting Al Dhafra Region’s eco-tourism ambitions. AUH was awarded the prestigious 3 Pearl Estidama rating for construction and was also recognised as the Best Airport at Arrivals Globally for the third consecutive year at the ACI ASQ Awards, further reinforcing its role as a world-class gateway to Abu Dhabi. At Al Bateen Executive Airport, Abu Dhabi Airports’ collaboration with Bombardier progressed with the establishment of a dedicated service facility, advancing Abu Dhabi’s maintenance, repair, and overhaul (MRO) capabilities to position the emirate as a centre of excellence for aviation services. A new agreement signed with TAQA Distribution will explore the integration of next-generation utility technologies across the airport portfolio, reinforcing Abu Dhabi Airports’ long-term vision for innovation and sustainable development.

    Abu Dhabi Airports has teamed up with IndiGo to boost passenger traffic, marking the 17th consecutive quarter of growth. This partnership enhances global connectivity, solidifying Abu Dhabi’s role as a major aviation hub. It highlights the ongoing commitment to expanding international travel ties, especially between the UAE and India.

    Abu Dhabi Airports continues to propel its long-term growth agenda by prioritizing the expansion of international partnerships, progressing key infrastructure initiatives, and fostering sustainable innovations. These efforts solidify its position as a key driver of the UAE’s economic diversification and ambitions for global leadership in aviation.


  • TEMSET-24K: Densely Annotated Dataset for Indexing Multipart Endoscopic Videos using Surgical Timeline Segmentation

    Annotation Assessment

    To ensure the consistency of labelling in the dataset, we designed an annotation process involving a team of colorectal cancer surgery specialists, all accredited with fellowship status with the Royal College of Surgeons (RCS, UK). The process began with one surgeon annotating one full video in a shared setting to demonstrate the annotation procedure for the multipart ESV files. Following this, another surgeon logged into the LS server using their credentials and navigated to the project they intended to annotate, accessing the individual video clips for annotation. The LS user interface provided a comma-separated list of phases, tasks, and actions for annotating the timeline of each video clip. Annotations were initially performed by one surgeon and subsequently validated by at least two other surgeons for cross-checking purposes. In cases of conflicting boundaries between the start and end of the labelling triplets, discussions were held to finalise annotations that were agreed by all surgeons. We employed multifaceted strategies involving our proposed dense taxonomy, collaboratively annotating one full surgery in shared settings, and holding iterative discussions to resolve conflicts, in order to achieve consistent annotations of the complex workflow scenes based on all surgeons’ inputs. The final annotations consisted of labels made up of five phases, 12 tasks, and 21 actions as defined by the proposed taxonomy. These annotations were then programmatically exported from LS in JSON format, along with the corresponding ESV files.

    Deep Learning Model Training

    Data Pre-Processing

    To improve the field of view, irrelevant areas comprising black regions were cropped from ESV images. The input image was first converted to grayscale, and a binary threshold was used to isolate the circular surgical region from the background. This step enhanced the visibility of the surgical scene. Subsequently, the largest contour within the thresholded image was identified and its minimum enclosing bounding box was computed. A mask corresponding to this circular region was created and applied to the original image to extract the surgical area while ignoring the background. The bounding box of the surgical region was cropped, and the cropped image was resized to its original size using bilinear interpolation. This method ensures that only the relevant surgical view is retained and standardised, facilitating improved visualisation and analysis of the surgical scene.
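
    The cropping step can be illustrated with a short, hedged sketch. The snippet below is a minimal reconstruction assuming OpenCV and NumPy; the function name `crop_surgical_view` and the threshold value are illustrative choices, not the authors' published code.

    ```python
    import cv2
    import numpy as np

    def crop_surgical_view(frame_bgr: np.ndarray, thresh: int = 10) -> np.ndarray:
        """Isolate the circular endoscopic view, mask out the black background,
        crop to the region's bounding box, and resize back to the frame size."""
        h, w = frame_bgr.shape[:2]
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)

        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return frame_bgr  # nothing detected; leave the frame unchanged
        largest = max(contours, key=cv2.contourArea)
        x, y, bw, bh = cv2.boundingRect(largest)

        # Keep only the surgical region, then crop and resize with bilinear interpolation.
        mask = np.zeros_like(gray)
        cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
        masked = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
        cropped = masked[y:y + bh, x:x + bw]
        return cv2.resize(cropped, (w, h), interpolation=cv2.INTER_LINEAR)
    ```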

    Problem Formulation

    A key objective of this study was to learn an unknown function F that maps high-dimensional TEMS endoscopic surgical videos $\mathbf{X} \in \mathbb{R}^{T\times H\times W\times 3}$ to a multitarget label triplet $\mathbf{Y} \in \{\text{Phase}, \text{Task}, \text{Action}\}$, where T, H, and W denote the sequence length (number of frames in the video), height, and width of the frames, respectively. To achieve this, this study proposes a Spatiotemporal Adaptive LSTM Network (STALNet) that learns the desired mapping. As shown in Fig. 5, STALNet integrates a TimeDistributed video encoder $\mathbf{E}^{T}$, followed by an adaptive long short-term memory (LSTM) module with attention as its last layer, $\mathbf{M}_{\mathrm{AA\text{-}LSTM}}$, to capture spatial and temporal dependencies in the ESV data. Let $\phi$ be the feature extraction function using the backbone. The output of the encoder is given by:

    $$\mathbf{F} = \mathbf{E}^{T}(\phi(\mathbf{X})); \quad \mathbf{X} \in \mathbb{R}^{B\times T\times C\times H\times W}, \qquad (1)$$

    where B is the batch size, T is the sequence length, C is the number of channels, and H and W are the height and width of the frames, respectively. We experimented with various encoders, including ConvNeXt (convnext_small_in22k)40, SWIN V2 (swinv2_base_window12_192-22k)41, and ViT (vit_small_patch16_224)42,43. These encoders were chosen for their proven ability to capture detailed spatial features across different scales, which is crucial for accurately interpreting surgical video frames. The extracted features are fed into an Adaptive LSTM module. This module consists of multiple LSTM layers, where the number of LSTMs depends on the input sequence length T. Each LSTM processes the sequence of features and produces hidden states. Let $\mathbf{h}_{t}$ represent the hidden state at time step t. The hidden states are computed as:

    $$\mathbf{H}_{t} = \mathbf{M}_{\mathrm{AA\text{-}LSTM}}(\mathbf{F}_{t}, \mathbf{h}_{t-1}),$$

    where $\mathbf{H}_{t} \in \mathbb{R}^{B\times D}$. Multiple LSTM layers were applied to capture temporal dependencies across the sequence. Incorporating LSTMs into the proposed solution in an adaptive manner significantly improved the model’s capacity for surgical scene understanding, as this approach leverages and preserves the temporal coherence in the videos, improving the stability and accuracy of the timeline predictions. The final hidden states from each LSTM layer are collected as $\mathbf{H} = [\mathbf{H}_{1}, \mathbf{H}_{2}, \ldots, \mathbf{H}_{T}] \in \mathbb{R}^{T\times B\times D}$ and their information across the sequence is aggregated using an attention mechanism. The attention weights are computed by applying a linear layer to the hidden states:

    $$\mathbf{A}_{t} = \mathrm{softmax}(\mathbf{W}_{a}\mathbf{H}_{t}),$$

    where $\mathbf{W}_{a} \in \mathbb{R}^{D\times 1}$ is the attention weight matrix. The attention-weighted output is computed as a weighted sum of the hidden states:

    $$\mathbf{O} = \sum_{t=1}^{T}\mathbf{A}_{t}\mathbf{H}_{t} \in \mathbb{R}^{B\times D}.$$

    The final output is obtained by passing the attention-weighted output through a fully connected layer followed by batch normalisation:

    $$\mathbf{Y} = \mathrm{BatchNorm}(\mathbf{W}_{h}\mathbf{O}),$$

    where $\mathbf{W}_{h} \in \mathbb{R}^{D\times (P+T+A)}$, with P, T, and A representing the number of phases, tasks, and actions, respectively. Mean ensembling was then employed to create more robust learners for each model, followed by heuristic-based prediction correction to address sporadic predictions.
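
    For concreteness, the following is a minimal PyTorch sketch of the architecture described by the equations above: a time-distributed encoder, an LSTM over the frame sequence, a learned attention layer, and a single multi-target head whose logits are split into phase, task, and action predictions. The use of torchvision’s ConvNeXt-Small, the hidden size, and the class counts (5/12/21) are assumptions for illustration rather than the authors’ exact implementation.

    ```python
    import torch
    import torch.nn as nn
    import torchvision.models as tvm

    class STALNetSketch(nn.Module):
        def __init__(self, n_phases=5, n_tasks=12, n_actions=21, hidden=512):
            super().__init__()
            self.n_phases, self.n_tasks = n_phases, n_tasks
            backbone = tvm.convnext_small(weights=None)
            backbone.classifier = nn.Identity()            # keep pooled 768-dim features
            self.encoder = backbone                        # phi, applied per frame
            self.lstm = nn.LSTM(768, hidden, batch_first=True)
            self.attn = nn.Linear(hidden, 1)               # W_a: per-timestep attention score
            self.head = nn.Linear(hidden, n_phases + n_tasks + n_actions)  # W_h
            self.bn = nn.BatchNorm1d(n_phases + n_tasks + n_actions)

        def forward(self, x):                              # x: (B, T, C, H, W)
            b, t = x.shape[:2]
            feats = self.encoder(x.flatten(0, 1)).view(b, t, -1)  # time-distributed encoding
            h, _ = self.lstm(feats)                        # H: (B, T, hidden)
            a = torch.softmax(self.attn(h), dim=1)         # A: attention over time
            o = (a * h).sum(dim=1)                         # attention-weighted sum, (B, hidden)
            y = self.bn(self.head(o))                      # fully connected layer + batch norm
            p, q = self.n_phases, self.n_tasks
            return y[:, :p], y[:, p:p + q], y[:, p + q:]   # phase, task, action logits
    ```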

    Fig. 5

    Proposed SpatioTemporal Adaptive LSTM Network (STALNet) for Surgical Timeline Segmentation. This network diagram shows the process by which ESV clips are analysed by encoders in order to assign reliable timeline segments.

    The model is trained using a custom loss function that combines the losses for phase, task, and action predictions. The total loss is given by:

    $$\mathcal{L} = \alpha\,\mathcal{L}_{p} + \beta\,\mathcal{L}_{t} + \gamma\,\mathcal{L}_{a},$$

    where $\mathcal{L}_{p}$, $\mathcal{L}_{t}$, and $\mathcal{L}_{a}$ are the individual losses for phase, task, and action predictions, and $\alpha$, $\beta$, and $\gamma$ are their respective weights. Each of these losses is computed using the CrossEntropyLossFlat function applied to each of the output triplets.
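
    A minimal sketch of this combined loss is given below, assuming plain PyTorch cross-entropy in place of fastai’s CrossEntropyLossFlat; the default weights are placeholders, not the paper’s tuned values.

    ```python
    import torch.nn.functional as F

    def combined_triplet_loss(preds, phase_y, task_y, action_y,
                              alpha=1.0, beta=1.0, gamma=1.0):
        """preds is the (phase, task, action) logits tuple returned by the model."""
        phase_logits, task_logits, action_logits = preds
        loss_p = F.cross_entropy(phase_logits, phase_y)
        loss_t = F.cross_entropy(task_logits, task_y)
        loss_a = F.cross_entropy(action_logits, action_y)
        return alpha * loss_p + beta * loss_t + gamma * loss_a
    ```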

    DL Model Implementation

    The model described in this paper was implemented using the fastai44 library. A server with 4 NVIDIA LS40 GPUs was used for training and validation. To enhance model convergence, the default ReLU activation function was replaced with the Mish activation function, which demonstrated superior performance in our experiments. Additionally, we substituted the default Adam optimiser with ranger, a combination of RectifiedAdam and the Lookahead optimisation technique, providing more stable and efficient training dynamics. To further optimise the training process, the to_fp16() method was employed to reduce the precision of floating-point operations, thereby enabling half-precision training and improving computational efficiency. The lr_find method was utilised to determine the optimal learning rate for the model, implementing a learning rate slicing technique. This approach assigned higher learning rates to the layers closer to the model head and lower learning rates to the initial layers, facilitating more effective training. For benchmarking, we initially evaluated several network architectures, including a basic image classifier, to establish a trivial baseline. This simple approach, however, produced significant sporadic predictions due to the absence of sequence modelling, highlighting the necessity for a more sophisticated model.
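
    The training choices listed above (ranger, half precision, lr_find with sliced learning rates) map onto fastai roughly as in the sketch below; the DataLoaders `dls`, the model, the loss function, and the epoch count are assumptions, and attribute names such as `valley` may vary slightly between fastai versions.

    ```python
    from fastai.vision.all import *   # provides Learner, ranger, fit_one_cycle, etc.

    # `model` is assumed to use nn.Mish activations in place of the default ReLU.
    learn = Learner(dls, model, loss_func=combined_triplet_loss, opt_func=ranger)
    learn = learn.to_fp16()                    # half-precision training
    suggestion = learn.lr_find()               # sweep learning rates and pick one
    lr = suggestion.valley
    # Discriminative ("sliced") rates: lower for early layers, higher near the head.
    learn.fit_one_cycle(10, lr_max=slice(lr / 10, lr))
    ```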

    Model Validation

    The model described in this paper was validated against the human-annotator ground truth using the server with NVIDIA LS40 GPUs. We compared the proposed STALNet architecture with various encoder backbones, including ConvNeXt, SWIN V2, and ViT. The output results were analysed against the baseline to compare performance metrics and to assess how well each configuration captured the spatiotemporal dependencies that are crucial for the surgical timeline segmentation task.

    Statistical Analysis

    For our model evaluation, we utilised standard metrics including accuracy, F1 score, and ROC (Receiver Operating Characteristic) curves. To illustrate model variability, standard deviation is reported for accuracy and F1 scores. The following equations define these metrics:

    $$\begin{aligned}
    \mathrm{Accuracy} &= \frac{TP+TN}{TP+TN+FP+FN}\times 100\,\%,\\
    \mathrm{Precision} &= \frac{TP}{TP+FP},\\
    \mathrm{Recall} &= \frac{TP}{TP+FN},\\
    \mathrm{F1\;Score} &= 2\cdot\frac{\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}.
    \end{aligned} \qquad (2)$$

    We computed these statistics at two levels: 1) Overall Model Performance: We reported the overall accuracy and F1 score on the entire validation set. 2) Class-Specific Performance: These metrics were computed for each taxonomy triplet class (phase, task, and action) to identify which classes the model struggles with the most. Additionally, ROC curves were used to visually investigate model performance. True positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) were derived from the predictions, which were then used to compute precision and recall, leading to the construction of ROC curves plotted using Scikit-learn. To enhance our analysis, we implemented custom visualisations showing video clips, target labels, and model predictions. We employed color coding (red for incorrect and green for correct predictions) for easy interpretation. All data and model results were visualised and analysed using Matplotlib, NumPy, and Scikit-learn.
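
    As a hedged illustration of how the overall and class-specific figures and the ROC curves could be computed with Scikit-learn, the sketch below assumes flat arrays of true labels, predicted labels, and per-class probability scores for one component of the triplet (e.g. phases); the function name `evaluate_component` is illustrative, not from the paper.

    ```python
    import numpy as np
    from sklearn.metrics import accuracy_score, f1_score, roc_curve, auc
    from sklearn.preprocessing import label_binarize

    def evaluate_component(y_true, y_pred, y_score, n_classes):
        overall = {
            "accuracy_%": 100 * accuracy_score(y_true, y_pred),
            "f1_macro_%": 100 * f1_score(y_true, y_pred, average="macro"),
        }
        # Class-specific F1, to spot the classes the model struggles with the most.
        per_class_f1 = f1_score(y_true, y_pred, average=None, labels=list(range(n_classes)))
        # One-vs-rest ROC curve per class, as plotted with Scikit-learn.
        y_bin = label_binarize(y_true, classes=list(range(n_classes)))
        rocs = {}
        for c in range(n_classes):
            fpr, tpr, _ = roc_curve(y_bin[:, c], np.asarray(y_score)[:, c])
            rocs[c] = (fpr, tpr, auc(fpr, tpr))
        return overall, per_class_f1, rocs
    ```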

    Model Performance Evaluation

    Table 1 presents the accuracy and F1 scores for each model across the three encoder architectures. The baseline image classification learner, which predicts timeline labels based solely on individual images, achieved an F1 score of 72.99% with the ConvNeXt encoder, 66.7% with the SWIN V2 encoder, and 60.87% with the ViT encoder. These results indicate the fundamental capability of deep learning models for surgical timeline segmentation but also highlight the limitations of relying solely on spatial information. In contrast, our proposed STALNet demonstrated significant performance improvements over the baseline model. On average, STALNet achieved an F1 score of 82.78% and an accuracy of 91.69%, reflecting an average performance gain of 9.79% in F1 score and 11.38% in accuracy compared to the baseline model. These improvements underscore the importance of incorporating spatiotemporal information for surgical timeline segmentation. Furthermore, the performance varied between different model encoders used in the time-distributed layer for feature extraction. Among the evaluated encoders, the ConvNeXt encoder achieved the highest accuracy with 91.69%, slightly better than the SWIN V2 encoder at 91.41%. However, the highest performing F1 score, which is a significant metric for evaluating timeline segmentation, was achieved by the SWIN V2 encoder at 86.02%, which is approximately 3.24% higher than the ConvNeXt encoder’s F1 score of 82.78%. This demonstrates that while ConvNeXt offers marginally better accuracy, SWIN V2 excels in terms of F1 score, highlighting its superior performance in capturing relevant features for timeline segmentation. Despite the higher F1 score of SWIN V2, it required substantial computation during both training and deployment phases. On the other hand, ConvNeXt not only delivered competitive performance but also offered a more computationally efficient solution, making it a practical choice for real-world applications. Overall, the STALNet model, particularly with the ConvNeXt encoder, demonstrated superior performance in segmenting surgical timelines. This highlights the efficacy of integrating spatiotemporal features and selecting robust encoder architectures to balance performance and computational efficiency.

    Table 1 Comparison of Surgical Timeline Segmentation Models.

    The STALNet model was also evaluated for its performance on each of the taxonomy triplets (phase, task, action) as shown in Tables 2, 3, and 4, respectively. The evaluation of phase segmentation reveals that the model performs exceptionally well across all phases, with only minor fluctuations in performance using different encoders. The ROC curves show its efficacy across these triplet behaviours (see Fig. 6). For example, the “Dissection” phase achieved an F1 score of 99.0% with no variance and an accuracy of 99.0% with a variance of 11.0% with the SWIN V2 encoder. Similarly, the “Setup” phase showed high performance with an F1 score of 98.0% and an accuracy of 99.0%, both exhibiting low variances (1% and 9%, respectively, with the ConvNeXt and SWIN V2 encoders). Even the “Closure” phase, despite being one of the more challenging phases due to its fewer instances, maintained an F1 score and accuracy of 100%, with variances of 0% and 5%, respectively, with the SWIN V2 encoder. These results indicate that the model effectively captures and segments the different phases consistently across three distinct encoders. In task segmentation, the model showed strong and consistent performance across most tasks. For instance, tasks such as “Longitudinal Muscle Dissection” and “Suturing” achieved high F1 scores of 99% each, with accuracies of 100% and 99%, and low variances (1% and 0%, and 7% and 8%, respectively) with the ConvNeXt encoder. This consistency reflects the model’s robust ability to segment tasks accurately. Conversely, the “Site” task had a significantly lower F1 score of 67% with a high variance of 33% with the ConvNeXt encoder. This indicates that the model struggles more with tasks that are less frequently represented in the dataset. For action segmentation, the model demonstrated high performance on frequently occurring actions such as “Scope Insertion” and “Stitching”, achieving F1 scores of 99% and 95%, and accuracies of 100% and 98%, respectively, with the ConvNeXt encoder. The variances for “Scope Insertion” were 1% for the F1 score and 3% for accuracy, while “Stitching” had variances of 4% and 15%, indicating stable and reliable performance. However, actions like “Debris Wash” and “Haemostasis,” which had lower F1 scores of 50% each, also exhibited higher variances of 50% with the ConvNeXt encoder. These findings suggest that the model’s performance is consistent for well-represented actions, but it struggles with less frequent actions.

    Table 2 Performance of the STALNet model on Surgical Phases across different encoders.
    Table 3 Performance of the STALNet model on Surgical Tasks across different encoders.
    Table 4 Performance of the STALNet model on Surgical Actions across different encoders.
    Fig. 6

    STALNet Performance Review using ROC Curves for Taxonomy Triplets. The top row of ROC curves shows the performance of ConvNeXt, ViT and SWIN V2 encoders on labelling high level TEMS surgical “Phases”. The next two rows show the performance of STALNet encoders on labelling TEMS surgical “Tasks” (intermediate level) and “Actions” (the fine level).

    In summary, our technical validation is deliberately structured to demonstrate the effectiveness of STALNet’s multi-target modelling strategy, which offers superior performance and semantic consistency compared to flat single-label approaches. In early experiments, we trained STALNet as a single-label classifier across all 84 triplet combinations. This uni-target formulation consistently plateaued at ~72% accuracy and struggled to model the underlying dependencies between triplet components. While it did not produce invalid triplets—since each output class was predefined—it lacked interpretability and failed to generalise well to complex surgical workflows.

    We also explored a multi-head architecture without tailored loss weighting. This improved expressiveness but still resulted in clinically implausible combinations, as the model lacked guided supervision to respect the hierarchical structure between phases, tasks, and actions. Our final multi-target approach, with three prediction heads and tailored loss functions for each triplet component, enabled the model to learn semantic relationships across components. This design achieved up to 91.7% accuracy and 86.0% F1 score on individual elements (see Tables 1 to 4), while effectively avoiding unrealistic triplet outputs by learning their internal structure. Although the results are shown in separate tables for interpretability, they originate from a single, unified model trained jointly with a triplet-aware loss.

    The results confirm that the STALNet model with the ConvNeXt encoder performs well and consistently across phases, tasks, and actions with sufficient training data, as evidenced by low variance in well-represented classes. However, as the number of classes increases—from five phases to 11 tasks to 21 actions—the modelling task becomes more challenging, leading to higher variance and lower performance for less frequent classes. This trend underscores the complexity of handling a larger number of classes and highlights the need to address class imbalance. Techniques such as weighted dataloaders and customised loss functions can mitigate these issues, improving the model’s robustness and performance across all categories.

    The results also illustrate the model’s superior capabilities in capturing the nuances of surgical workflows. The ROC curves highlight that the SWIN V2 encoder outperforms the other encoders in terms of accuracy and F1 score. The model’s output is visually depicted in an infographic in Fig. 7. This shows the input video clips with predicted and actual taxonomy triplet labels from a batch. This visualisation clearly demonstrates the trends discussed in the performance tables and ROC curves, providing a comprehensive understanding of the model’s efficacy in real-world scenarios.

    Fig. 7

    STALNet: Batch of results for visual inspection. This figure illustrates the output of the STALNet model compared to human annotations—the ground truth (GT). Each tile displays the first, middle, and last frames of a video clip, along with predictions and GT for each taxonomy triplet (Phase, Task, Action) at the top. Green font indicates agreement with the GT, while red font indicates disagreement. In this example, there is widespread agreement except for one microclip where the model predicted the action “retraction” instead of “dissection” as labeled by the human annotators.

    The focus of this study was to provide a high-fidelity resource that enables the development of AI models for accurate surgical video indexing, such as our proposed STALNet architecture. While the objective is not to directly evaluate models for upstream tasks like surgical skill assessment—which require deeper reasoning and semantic understanding—this foundational work is essential for enabling scalable retrospective video analysis and supporting future clinical applications. To support this, the structured phase-task-action triplet taxonomy was co-designed with a panel of expert colorectal surgeons, aiming not only to capture workflow granularity but also to embed clinically meaningful signals that could potentially serve as proxies for surgical competence. For example, metrics derived from factors such as the frequency of intraoperative adverse events (e.g., bleeding), the length of inactive periods (“no action”), or the volatility of phase transitions could, in future studies, be investigated as indicators of procedural fluency or surgeon expertise. These hypotheses are particularly relevant for distinguishing between experienced and novice operators, as variability in temporal workflow progression may indeed reflect differences in training or technical confidence.


  • Pakistan blocks over 500 terrorist-linked social media accounts

    The Government of Pakistan has blocked over 850 social media accounts linked to banned militant groups after reporting them on different platforms in a nationwide crackdown. 

    The accounts, allegedly linked to proscribed groups such as Tehreek-e-Taliban Pakistan (TTP), Baloch Liberation Army (BLA) and Baloch Liberation Front (BLF) — all banned by the United Nations, United States and the United Kingdom as well — had a combined following of over two million.

    The newly formed National Cyber Crime Investigation Agency (NCCIA) and Pakistan Telecommunication Authority (PTA) coordinated the action, reporting accounts across Facebook, Instagram, TikTok, X (formerly Twitter), Telegram and WhatsApp.

    Read: New Pak-US front against terror trains sights on BLA, TTP

    Federal IT Minister Shaza Fatima Khawaja secured direct cooperation from Telegram officials as well despite the app being banned in Pakistan.

    The government has also appealed for international cooperation to prevent extremist propaganda online.

    Facebook and TikTok acted on over 90% of the removal requests, while X and WhatsApp showed 30% compliance.

    Officials warned that while mainstream Pakistani media remains free of extremist content, militant groups are still using online platforms for recruitment and incitement.

    Islamabad urged global platforms to permanently block terrorist-linked accounts, deploy AI-based removal systems, and maintain direct contact with Pakistani authorities.

     


  • Second contestant edited out of Gregg Wallace and John Torode’s final series

    A second MasterChef contestant has been edited out of this year’s scandal-hit series, BBC News can exclusively reveal.

    A spokesperson for the show’s production company, Banijay, said: “One other contributor decided that given recent events they would like not to be included. We have of course accepted their wishes and edited them out of the show.”

    Another contestant, Sarah Shafi, was also removed from the series after asking for it not to be broadcast, following a report which upheld claims against hosts Gregg Wallace and John Torode.

    The BBC decided to still show this year’s amateur series, which was filmed before the pair were sacked, saying it was “the right thing to do” for the chefs who took part.

    But it faced a backlash from some women who came forward, while the broadcast union Bectu said bad behaviour “should not be rewarded with prime-time coverage”.

    Former Celebrity MasterChef contestant and BBC journalist Kirsty Wark also suggested the BBC could have refilmed the series without the two co-hosts.

    In the event, both Wallace and Torode remain in the series, which began last week on BBC One and on iPlayer.

    But the episodes appear to have been edited to include fewer jokes than usual, with less chat between them and the chefs.

    The episode which would have featured the second contestant was broadcast on BBC One on Wednesday night, but only featured five chefs rather than the usual six.

    BBC News understands the individual has asked not to be identified and they will not feature in the show.

    It’s believed Shafi’s episode has not yet aired.

    The BBC previously said it had not been “an easy decision” to run the series, adding that there was “widespread support” among the chefs for it going ahead.

    “In showing the series, which was filmed last year, it in no way diminishes our view of the seriousness of the upheld findings against both presenters,” it said.

    “However, we believe that broadcasting this series is the right thing to do for these cooks who have given so much to the process. We want them to be properly recognised and give the audience the choice to watch the series.”
