Author: admin

  • Top 10 changes coming with Cyberpunk 2077 2.3 Update | Esports News


    (Image via YouTube/Cyberpunk 2077)

    CD Projekt Red’s latest update for Cyberpunk 2077 is here, packed with exciting new features. Update 2.3 introduces major customization, vehicle, and immersion improvements, right alongside technical upgrades for smoother gameplay. Here is a breakdown of the biggest changes coming to Night City.

    Top 10 changes in Cyberpunk 2077 update 2.3

    Landing on July 17, 2025, on PlayStation 5, Xbox Series X|S and PC, the update delivers a significant package. While it’s not a massive narrative expansion, it does inject fresh life into highly requested features, which are aimed at making navigation of the dystopian metropolis more stylish and smoother than ever before.

    REDstreams — Update 2.3 Overview

    AutoDrive: For a hands-free experience

    The AutoDrive feature is the most anticipated addition in the update. AutoDrive allows players to sit back while the car navigates Night City: just set a destination or enable free-roam mode. The vehicle follows traffic rules, avoids collisions and even stops at red lights. The feature disengages during combat to ensure you remain in control when things heat up.

    Cinematic cruising camera

    Alongside AutoDrive is a paired cinematic cruising camera mode that dynamically cycles through cinematic angles as the car moves, capturing stunning vistas and letting players create their own Night City music videos while the car drives itself.

    Delamain Taxi Service

    This feature is unlocked by completing the “Don’t Lose Your Mind” quest. It allows players to summon a Delamain cab through the vehicle call menu. Just input the destination and sit back to enjoy the ride. It’s a seamless way to explore the city.

    Four new vehicles

    Update 2.3 adds three new cars and a motorcycle, most of them tied to unique side jobs:

    • Yaiba ARV-Q340 Semimaru – unlocked by completing The Beast in Me and The Hunt quests.
    • Rayfield Caliburn “Mordred” – unlocked by completing The Beast in Me: Badlands and The Beast in Me: Santo Domingo, or Search and Destroy.
    • Yaiba ASM-R250 Muramasa – unlocked by purchasing Yaiba vehicles; three vehicles must be purchased via the Autofixer.
    • Chevillon Legatus 450 Aquila – can be purchased directly through the Autofixer.

    Expanded vehicle customization

    TwinTone and CrystalCoat now support a total of 32 vehicles, including motorcycles. The hacked version allows lower-end cars to get a fresh paint job. There are over 370 new paint schemes—168 unique and 205 generic—so your ride can stand out in Night City. Note: partner vehicles and motorcycles can now use CrystalCoat.

    Photo mode is revolutionized

    Capture Night City like never before with the updated Photo Mode. Spawn 27 new NPCs, including fan favourites like the Cassel twins and Brendan. You can even change character outfits—for NPCs and V—or alter time and weather instantly. Frame skipping helps line up perfect action shots, while the powerful new Look-At system, custom filters and advanced color balance tools round out the kit. With all this, players now have more creative freedom in Cyberpunk 2077 than ever before.

    Everything That’s New in Cyberpunk 2077 Update 2.3

    Mac debut and cross-progression

    Cyberpunk 2077 has finally and officially arrived on Macs with Apple silicon, available via the Mac App Store, Epic Games Store and Steam. Existing PC owners on supported storefronts get access for free. The Mac version features frame generation, ray tracing and HDR tuned for Apple XDR displays.

    AMD FSR 3.1 and Intel XeSS 2.0 support

    PC players benefit from AMD FSR 3.1 frame generation as well as Intel XeSS 2.0 support, improving visuals and performance. FSR 4 support will arrive later with an AMD driver update.

    HDR10+ and Console VRR upgrades

    HDR10+ Gaming is now supported on compatible displays. In addition, VRR support is now available on consoles, reducing screen tearing and delivering smoother gameplay on Xbox Series X|S and PlayStation 5.

    Quality of life improvements

    • Better HDR calibration.
    • Fixed Cyberware Capacity Shard drops.
    • Adjusted inverted mouse prompts.
    • Replayable tutorials, available in Settings.

    Cyberpunk 2077 is more immersive than it’s ever been. Whether you are customizing cars, letting AutoDrive take the wheel or snapping pictures, the patch, which landed on July 17, 2025, changes the gaming experience. Enjoy it on PlayStation 5, Xbox Series X|S or PC as you hit the streets.


    Continue Reading

  • Epic Games Forces Fortnite Cheaters to Post Public Apologies, Once Again


    In a bold new step against online cheating, Epic Games is now making Fortnite cheaters issue public apologies as part of legal settlements. The move is part of the company’s broader effort to protect the integrity of the game and send a message to those who try to gain unfair advantages.

    We took legal action against two people who cheated and broke our rules:

    One sold and used cheats and the other carried out cyber attacks on content creators who were livestreaming gameplay (aka: DDoS attacks). Both have been ordered to stop these activities and are banned from…

    — Epic Games Newsroom (@EpicNewsroom) July 14, 2025

    According to a recent report by The Express Tribune, players who were caught using illegal software to cheat in Fortnite were not only banned but also legally compelled to admit guilt. These cheaters, mostly based in the U.S. and Europe, signed court-enforced agreements that included statements like:

    “I regret cheating in Fortnite and will not do it again.”

    These apologies have been posted publicly online. Sometimes these apologies also appear on forums such as Reddit or personal websites, making the embarrassment part of the punishment.

    Legal Action Against Fortnite Cheaters Isn’t New for Epic

    Epic Games has a history of taking legal action against cheaters and cheat developers. The company previously filed lawsuits against multiple individuals who created or sold aimbots, wall hacks, and other exploits for Fortnite. In the past, some cases have ended with heavy fines, while others required the offenders to promise never to cheat again or face future legal consequences.

    The public apology approach is a new twist that combines legal deterrence with public accountability.

    Why Epic Is Doing This

    With over 400 million registered players and a massive in-game economy, cheating in Fortnite can ruin competitive balance and damage the experience for millions. By forcing cheaters to publicly own up to their actions, Epic hopes to deter would-be offenders and reinforce public accountability.

    A spokesperson for Epic Games said the company is “committed to fair play” and will continue pursuing cheaters legally when necessary.



  • Argentina awaits the Barça Legends


    On Saturday 19 July at 4pm local time (9pm CEST), Barça Legends take to the field at the Mas Monumental stadium in Buenos Aires to play their first-ever game in Argentina. Their opponents will be the River Plate Leyendas team, and the game can be seen live exclusively on Barça’s official YouTube channel.

    The game was originally due to take place a fortnight ago. Expectation in Argentina is great ahead of the game and ticket sales are going well, as no-one wants to miss this meeting between the veterans of two of world football’s great clubs.

    Full agenda 

    Beyond Saturday’s game, the blaugranes have a busy agenda during their stay in Argentina. On Friday there is an open training session for the fans which follows a football clinic at the River Plate facilities for disadvantaged children. Furthermore, thanks to the efforts of the River Plate Foundation, there will also be a conference involving Gaizka Mendieta and Juan Pablo Sorín from the Barça squad and Pablo Aimar and Marcelo Barovero from the River Plate team in which the players will share some of their greatest memories from their careers and what they have learned in terms of leadership both on and off the field. 

    The team will also visit the Generalitat de Catalunya delegation based in Argentina, and on matchday the Argentina Barça Supporters’ Club will get a chance to meet the Barça players.

    Two squads for the game 

    Coach Albert ‘Chapi’ Ferrer has named the following players for the game: Jesús Angoy, Vítor Baía, Samuel Okunowo, Sergi Barjuan, Marc Valiente, Frank de Boer, Frédéric Déhu, Philippe Christanval, Fernando Navarro, Juan Pablo Sorín, Phillip Cocu, Gerard López, Ronald de Boer, Gaizka Mendieta,  Edgar Davids, José Edmílson, Giovanni, Jonatan Soriano and Nolito. 

    River Plate: Marcelo Barovero, Matías Giordano, Cristian Tula, Ariel Franco, Jonathan Maidana, Matías Abelairas, Walter Acevedo, Leonardo Ponzio, Nicolás Domingo, Guillermo Pereira, Alejandro Domínguez, Pablo Aimar, Ariel Ortega, Rodrigo Mora, Leonardo Caruso and Fernando Cavenaghi.


  • West Indies all-rounder Russell to retire from international cricket – Sports


    Two-time Twenty20 World Cup winner Andre Russell will retire from international cricket at age 37 after the second T20 match against Australia on July 22 in his hometown of Kingston, Jamaica, Cricket West Indies (CWI) said on Wednesday.

    All-rounder Russell, who won the T20 World Cup in 2012 and 2016, has earned 84 international caps in the format, scoring three fifties and taking 61 wickets.

    The white-ball specialist, who played only one test match, also appeared in 56 One-Day Internationals (ODI), taking 70 wickets. He last played in the 50-over format in 2019.

    “Words cannot explain what it meant. To represent the West Indies has been one of the proudest achievements in my life,” Russell said in a statement.


    “When I was a kid, I did not expect to get to this level, but the more you start to play and get to love the sport, you realize what you can achieve.

    This inspired me to become better because I wanted to leave a mark in the maroon colours and become an inspiration to others.”

    Russell, who travels around the world competing in T20 leagues and most recently appeared in Major League Cricket in the U.S. this month, said he wanted to finish his international career on a high.

    “His hunger to perform and win for West Indies has never wavered. I wish him all the best on his next chapter, and I hope he continues to inspire generations to come,” West Indies coach Daren Sammy said.

    West Indies host Australia in the first T20 of the five-match series on Sunday in Kingston. Australia won their test series 3-0.


  • Former Pakistan cricketer slams Ravindra Jadeja for lacking intent during chase in Lord’s Test – Firstpost


    Indian all-rounder Ravindra Jadeja’s batting has been questioned by a former Pakistan cricketer for lacking intent, with the claim that India could have won the Lord’s Test had they been more proactive.


    Former Pakistan cricketer Kamran Akmal has questioned Ravindra Jadeja’s cautious batting, holding it responsible for India’s heartbreaking loss to England in the third Test at Lord’s. India needed just 193 runs in the final innings to clinch the match and take a 2-1 lead in the five-match series. However, Jofra Archer and Ben Stokes took three wickets each to turn the game on its head as India were reduced to 112/8 on Day 5.


    Nonetheless, a valiant knock from Ravindra Jadeja and some stubborn batting from tailenders Jasprit Bumrah and Mohammed Siraj gave India a glimmer of hope before it was dashed in the most heart-wrenching manner. Jadeja and Bumrah added 35 runs in 22 overs, while the all-rounder stitched a thrilling 23-run stand with Siraj that lasted 13.2 overs as India started to dream about a memorable win at Lord’s.


    The dream was crushed most cruelly as the ball rolled down to hit the stumps after Siraj defended a Shoaib Bashir delivery.

    Akmal blasts Jadeja’s batting in Lord’s Test

    Analysing India’s narrow loss, former Pakistan wicketkeeper-batter Kamran Akmal accused India of lacking intent during the chase and said that England would have completed the small target in less than 30 overs. The Indian innings lasted 74.5 overs before they were bowled out for 170.

    “If they had tried, runs would have come. Against this bowling, they scored so many runs in the first two Tests, so 192 (193) was not such a difficult target,” Akmal said on YouTube channel The Game Plan. “The pitch was a bit difficult, but the uneven bounce that was there during England’s innings wasn’t there during India’s chase. England would have completed the chase by lunch.

    “In batting, the planning was not right. When Jadeja was getting to play the final two balls of the over and the fielders were up, he should have taken his chances. He wanted to take the match deep. No batter tried to make runs.”


    Akmal was specifically critical of Jadeja, who scored 61 not out off 181 balls at a strike rate of 33.7, as he felt Jadeja needed to be more aggressive in his approach.

    “Bumrah and Siraj defended very well and provided partnerships. But the balls that Jaddu got. The runs that he could have scored, had he got them, India could have won by Tea,” he added.


    India will aim to level the series when they face England in the fourth Test at the Old Trafford Cricket Ground from 23 July.


  • Transformers At The Edge: Efficient LLM Deployment


    Since the groundbreaking 2017 publication of “Attention Is All You Need,” the transformer architecture has fundamentally reshaped artificial intelligence research and development. This innovation laid the foundation for Large Language Models (LLMs) and Vision Language Models (VLMs), fueling a wave of productization across the industry. A defining milestone was the public launch of ChatGPT in November 2022, which brought transformer-powered AI into mainstream use. Since then, LLMs have enabled a broad spectrum of applications, from conversational agents to advancements in medical research.

    However, running these LLMs efficiently presents substantial challenges, particularly on edge computing devices and legacy hardware architectures that were designed before the widespread adoption of large language models.

    One of the significant difficulties facing AI processors is the sheer size of LLMs compared to prior state-of-the-art CNNs (Convolutional Neural Networks), RNNs (Recurrent Neural Networks), and other network types.

    Among CNNs and RNNs, an 85-million-parameter model would have been considered large. In comparison, even a modestly sized LLM might have 1B parameters, while models with 8B parameters, and larger, are commonplace. Said plainly, there is no mass-market, cost-effective method to load that many parameters on a single chip; thus, pre-existing solutions may not be effective.

    Consider Llama 3.2, Meta’s latest generation of LLMs, which introduced significant advancements in both text and multimodal (text + vision) AI capabilities. This release expands on previous versions with new model variants and features designed for both enterprise and edge-device deployment. The smallest Llama 3.2 variant contains 1 billion parameters. Furthermore, the cost of attention operations grows with context size n. During prefill, where operations are predominantly done in parallel, the compute load is a function of the square of the context size, O(n²). The prefill phase is compute-bound, meaning its speed is limited by the raw computational power of the hardware. In contrast, decode is predominantly sequential, and the compute load is roughly O(n) per generated token. However, the decode phase is dominated by memory access speed rather than compute power, so the effective per-token cost can be orders of magnitude higher than during prefill.
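    As a back-of-the-envelope illustration of this scaling difference (a sketch with made-up context lengths, not figures from any vendor), causal attention during prefill scores roughly n² token pairs in one pass, while each decode step scores only about n pairs against the cache:

```python
# Back-of-the-envelope attention-cost sketch (illustrative numbers only).
# Prefill attends every token to every earlier token: ~n^2 pairs total.
# Each decode step attends one new token to the n cached tokens: ~n pairs.

def prefill_attention_pairs(n: int) -> int:
    """Token pairs scored during causal prefill: n*(n+1)/2."""
    return n * (n + 1) // 2

def decode_attention_pairs(n: int) -> int:
    """Token pairs scored for ONE new token against an n-token KV cache."""
    return n + 1

for n in (128, 1024, 8192):
    print(n, prefill_attention_pairs(n), decode_attention_pairs(n))
```

    The quadratic prefill term is why long prompts dominate raw compute, while the linear per-step decode term is small enough that decode throughput ends up limited by memory traffic to the KV cache rather than by arithmetic.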

    Exploring LLM inference flow

    The LLM inference flow begins with a user prompt, a sentence spoken or typed by the user. In Figure 1 below, we use the example of “May the force.” The prompt is first translated into tokens—mathematical representations of the text—by a processing mechanism aptly named the “tokenizer.” The tokens are then sent through the inference processing steps, which are divided into two phases: the prefill phase and the decode phase.

    In the prefill phase, all the tokens are sent at once through a series of transformer blocks. At each transformer layer, the Key (K) and Value (V) vectors for each token are stored in a cache—the KV cache, or attention cache—which is then used to make subsequent generations faster. In the decode phase, one token at a time is generated, as this part is sequential in nature.

    In Figure 1, examples of this are the generation of “be, with” and “you”.

    Fig. 1: LLM inference flow.

    During the prefill stage, the model needs to compute attention over all previous tokens. However, during the decode stage, the processor only needs to compute attention over the new token because it reuses the cached values from prefill. This enables efficient autoregressive decoding — the processor doesn’t need to recompute everything from scratch each time. The prefill stage is compute-heavy because it processes the entire input prompt.

    Prefill compute cost grows with prompt length – if the prompt is N tokens and the model has L layers, the processor must run all N tokens through all L layers. However, the decode stage is relatively fast after that because the system processes only one token at a time, reusing the cache. This raises an issue, though – as the phases are very different in compute, memory, and power, how can a solution be enabled that is optimal for both?
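    The flow described above can be sketched as a toy loop (the class and function names here are illustrative, not a real framework API): prefill pushes every prompt token through every layer once and fills the KV cache, then each decode step processes only the newest token while reusing that cache.

```python
# Toy sketch of KV-cached autoregressive inference (illustrative names).
# Prefill runs all prompt tokens through every layer once and fills the
# cache; each decode step then processes only the single newest token.

class KVCache:
    def __init__(self, num_layers: int):
        self.keys = [[] for _ in range(num_layers)]
        self.values = [[] for _ in range(num_layers)]

    def append(self, layer: int, k, v):
        self.keys[layer].append(k)
        self.values[layer].append(v)

NUM_LAYERS = 4

def run_layer(layer: int, token, cache: KVCache):
    # Stand-in for real attention: derive fake K/V from the token and
    # stash them so later steps can "attend" over everything cached.
    k = v = (layer, token)
    cache.append(layer, k, v)
    return token  # a real layer would transform the hidden state

def prefill(prompt_tokens, cache: KVCache):
    for tok in prompt_tokens:            # all prompt tokens...
        for layer in range(NUM_LAYERS):  # ...through every layer
            run_layer(layer, tok, cache)

def decode_step(new_token, cache: KVCache):
    for layer in range(NUM_LAYERS):      # only the newest token per step
        run_layer(layer, new_token, cache)

cache = KVCache(NUM_LAYERS)
prefill(["May", "the", "force"], cache)
for tok in ["be", "with", "you"]:
    decode_step(tok, cache)

print(len(cache.keys[0]))  # 6 entries per layer: 3 prefill + 3 decoded
```

    Because each decode step only appends one K/V entry per layer instead of recomputing all earlier ones, the cache grows linearly while per-step work stays constant in this sketch.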

    LLM inference runtime

    Fig. 2: LLM inference runtime.

    Let’s also examine the runtime changes between traditional AI networks and LLMs, which are illustrated in Figure 2.

    Traditional CNNs have a simple, monolithic runtime with only two phases:

    • Data loading phase
    • Inference phase

    LLMs introduce a multi-phase runtime system with five distinct phases, each with different computational and memory requirements.

    Prefill Phase

    • Processes the initial user prompt by tokenizing and embedding the entire input sequence
    • Runs all transformer layers in full sequence mode (dense computation)
    • Initializes and populates the Key-Value (KV) cache with attention values
    • Generally incurs higher latency due to full-sequence processing
    • May use microbatching for lengthy inputs

    Decode Phase

    • Generates output tokens one at a time autoregressively
    • Only processes the last generated token per step
    • Retrieves past tokens from the KV cache for efficiency
    • Computes self-attention only against past tokens
    • Highly optimized with KV caching and batching

    Inactive Phase

    • No computation occurs, but the sequence remains “alive” in memory
    • Occurs when waiting for new user input in streaming/chat interfaces
    • KV cache remains in memory (costly resource usage)
    • Can become a bottleneck in high-throughput systems with many cached sequences

    Follow-up Prefill

    • Triggered when new user input is appended to a partially generated sequence (multi-turn conversations)
    • Processes new input as a short prefill segment, appending to existing cached context
    • Updates KV cache with new tokens
    • Distinct from initial prefill as it operates on shorter, appended segments

    Retired Phase

    • Sequence is terminated and removed from the active pool
    • KV cache is freed and resources are released
    • Triggered by conversation completion, user cancellation, or timeouts
    • Frees memory and scheduling capacity for other sequences
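    The five phases above can be sketched as a small state machine; the allowed transitions below are an illustrative reading of the phase descriptions, not a specification of any real scheduler:

```python
# Minimal state machine over the five LLM runtime phases described above.
# Transition rules are illustrative; a real scheduler would also handle
# errors, preemption, and cache eviction.

ALLOWED = {
    "prefill": {"decode"},
    "decode": {"inactive", "retired"},
    "inactive": {"follow_up_prefill", "retired"},
    "follow_up_prefill": {"decode"},
    "retired": set(),  # terminal: KV cache freed, resources released
}

class Sequence:
    def __init__(self):
        self.phase = "prefill"  # every sequence starts with a prefill

    def advance(self, next_phase: str):
        if next_phase not in ALLOWED[self.phase]:
            raise ValueError(f"illegal transition {self.phase} -> {next_phase}")
        self.phase = next_phase

# A multi-turn chat: prefill -> decode -> wait -> follow-up -> decode -> retire.
seq = Sequence()
for phase in ["decode", "inactive", "follow_up_prefill", "decode", "retired"]:
    seq.advance(phase)
print(seq.phase)  # retired
```

    Even this toy version shows why the inactive phase is costly: a sequence can sit in "inactive" indefinitely, and its KV cache must stay resident the whole time.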

    This multi-phase complexity significantly exacerbates deployment difficulties for LLMs compared to traditional AI networks. Managing multiple phases simultaneously, maintaining expensive KV caches, and handling dynamic transitions between phases create substantial challenges for efficient LLM deployment and resource management.

    New architectures for LLMs

    Large Language Models introduce distinct challenges for inference hardware and software systems, including complex model architectures, demanding runtime computations, transformer-specific operations, and implementation considerations. To address these requirements, AI processing platforms must advance to accommodate diverse data representations, support multiple precision formats, deliver enhanced computational throughput, and enable efficient multi-core processing architectures.

    Expedera explores this further in our white paper at: https://www.expedera.com/transformers-at-the-edge/


  • The forgotten FOSS phone OS • The Register


    The result of the pioneering joint Psion and Nokia smartphone effort is still out there on GitHub.

    Smartphones are everywhere. They are entirely commoditized now. Most of them run Android, which uses the Linux kernel. The rest run Apple’s iOS, which uses the same XNU kernel as macOS. As we’ve said before, they’re not Unix-like, they really are Unix™.

    There have been a bunch of others. BlackBerry tried hard with BB10, but even a decade ago, it was over. It was based on QNX and Qt, and both of those are doing fine. We reported last year that QNX 8 is free to use again. Palm’s WebOS ended up with HP and now runs in LG smart TVs – but it’s Linux underneath.

    The most radical, though, was probably Symbian. The Register covered it at length back in the day, notably the epic Psion: the Last Computer feature, followed by the two-part Symbian, The Secret History, and Symbian UI Wars features.

    Built from scratch in the late 1990s in the then-relatively new C++, it evolved into a real-time microkernel OS for handhelds, with the radical EKA2 microkernel designed by Dennis May and documented in detail in the book Symbian OS Internals. There’s also The Symbian OS Architecture Sourcebook [PDF]. An official version of the source code is on GitHub, and other copies are out there.

    We liked this description from CHERI Project boffin David Chisnall:

    Before Nokia was assimilated and digested by Microsoft, it open sourced the OS, and despite some licensing concerns, it’s still there.

    It strikes this vulture as odd that while work continues on some ground-up FOSS OS projects in C++, such as the Genode OS, or Serenity OS, which we looked at in 2022, the more complete Symbian, which shipped on millions of devices and for a while had a thriving third-party application market, languishes ignored.

    (Incidentally, the Serenity OS project lead has moved on to the independent Ladybird browser, which we looked at in 2023. Work on the OS continues, now community-led.)

    Symbian’s progenitor, Psion EPOC32, predates much of the standardization of C++ – much as BeOS did. We’ve seen comments that it was not easy to program, but tools such as P.I.P.S. made it easier. Nokia wasted vast effort on multiple incompatible UIs, which have been blamed for tearing Symbian apart, but none of that matters now: adapt some existing FOSS stuff, and forget backwards compatibility. Relatively few of the apps were FOSS, and who needs touchscreen phone apps on a Raspberry Pi anyway? Qt would be ideal – it’s a native C++ tool too.

    Fans of all manner of 20th century proprietary OSes from AmigaOS to OS/2 bemoan that these never went open source. Some of BeOS made it into PalmOS Cobalt but that sank. Palm even mulled basing an Arm version of PalmOS on Symbian, but the deal fell through.

    Some of those OSes have been rebuilt from scratch, including AmigaOS as AROS and BeOS as Haiku. But they run on Intel. Neither runs natively on Arm, and yet Symbian sits there ignored. Sometimes you can’t even give the good stuff away. ®


  • Samsung may have revealed name of its tri-folding phone


    Samsung, during the Galaxy Unpacked event in January 2025, where it launched the Galaxy S25 series, hinted that it is working on a smartphone with a tri-folding display. A few months later, an animation was discovered in the code of the brand’s Android 16-based One UI 8.0 which gave us a sneak peek at the form-factor and folding mechanism of the company’s upcoming tri-folding Galaxy phone, confirming its existence and hinting at an imminent launch. 

    The teaser for the last Galaxy Unpacked contained a hint that Samsung may launch a tri-folding phone alongside the Galaxy Z Fold 7 and the Galaxy Z Flip 7. Unfortunately, that didn’t happen. However, right after the event, the South Korean tech giant confirmed that it will launch the tri-folding Galaxy phone by the end of 2025 and that it will soon decide the device’s name. Well, Samsung seems to have finally decided on the phone’s name, and it could be the Galaxy Z TriFold.

    Samsung files trademark protection for ‘Galaxy Z TriFold’ in Korea

    Our friends at GalaxyClub have discovered that Samsung has applied for trademark protection for the ‘Galaxy Z TriFold’ name in South Korea, as you can see in the image below. The publication believes that this name could either be for the tri-folding phone or the lineup it will be part of, which is Galaxy Z.

    However, GalaxyClub and we at SamMobile believe that Galaxy Z TriFold isn’t a very catchy name, and therefore, we are not sure if Samsung will use it for its first tri-folding phone. As such, this name could most likely be for the tri-folding smartphone series. 

    Usually, Samsung and many other brands secure all the potential names for their upcoming product lineups and devices, and end up using only one of them. If that’s the case with the Galaxy Z TriFold name, there’s a possibility that we may not see Samsung using it at all. In short, nothing can be said for sure at the moment. 

    What is certain is that Samsung is seriously working on bringing the tri-folding Galaxy phone to the market considering that it has filed for trademark protection for its potential name. On a related note, previous reports have suggested that the brand may call it Galaxy G Fold.


  • Building local research capacity to advance sexual and reproductive health evidence


    Behind every policy and intervention that improves sexual and reproductive health outcomes and access to services, there is research. And behind that research, there must be skilled researchers. With evidence guiding decisions, health systems respond more effectively, services improve and rights are upheld.

    The HRP Alliance’s regional hubs have been demonstrating what it means to build sustainable research capacity in sexual and reproductive health and rights (SRHR). Anchored in the mission to promote health and rights for all, the HRP Alliance, coordinated by the UN’s Special Programme in Human Reproduction (HRP), brings together seven regional ‘hubs’ that serve as catalysts for knowledge, collaboration and innovation.

    Since the HRP Alliance’s establishment in 2017, its hubs have been empowering local researchers and institutions through training, mentorship, fellowships and institutional support. Moreover, they enable context-specific responses to some of the world’s most pressing SRHR challenges. Seven impact stories document how locally-led research through this initiative has driven global progress.

    In Brazil, the hub for the Americas region at the Campinas Reproductive Health Research Center (CEMICAMP) responded to the Venezuelan migration crisis by training researchers across the region to study the SRHR needs of displaced populations. Their findings on access to care, HIV treatment and sexual violence helped close a major data gap which led to a more human-centred understanding of the needs of displaced populations.

    In Burkina Faso, the Francophone Africa hub, housed at the Health Science Research Institute (IRSS), is creating a regional data and training centre, with 50 Master’s and PhD graduates now leading research and public health efforts across West and Central Africa. Their studies on postpartum contraception and maternal care are informing health strategies.

    In Ghana, the Anglophone Africa hub, housed at the University of Ghana’s School of Public Health, launched a joint master’s programme with the London School of Hygiene and Tropical Medicine. The joint programme has built on years of investment by the HRP Alliance in developing a critical mass of skilled researchers in SRHR. Graduates have gone on to lead national SRHR units and contribute to major studies on adolescent maternal care and quality of services.

    In Kenya, the hub at the African Population and Health Research Center (APHRC) developed a training programme to help researchers and health workers reflect on their personal beliefs and how these might affect their work on sensitive issues like abortion, sexuality and HIV. The model, called values clarification and attitude transformation training, is now being adopted across Africa.

    In Pakistan, the Eastern Mediterranean hub at Aga Khan University worked directly with hospitals during COVID-19, training researchers and influencing maternal care practices, including the adoption of tools to detect maternal sepsis. Their adaptive, hospital-linked approach is now seen as a model for emergency-responsive research.

    In Thailand, the hub for the South-East Asian Region at Khon Kaen University focused its efforts on Myanmar, training a core group of researchers to generate evidence in a fragile setting. Their work on respectful maternity care and cervical cancer screening is now helping to shape maternal health policies, aimed at improving care quality, reducing mistreatment during childbirth and increasing access to lifesaving screening services.

    And in Viet Nam, the hub for the Western Pacific Region at Hanoi Medical University created a dedicated SRHR track within its International Master of Public Health programme, equipping researchers with the tools to address issues relating to adolescent health and gender-based violence. Graduates reported strengthened skills in data analysis, literature review and research presentation, and several went on to work in national health institutions, including the Ministry of Health.

    The stories capture how each hub has been working in its own way. Some prioritize formal academic pathways; others focus on skills development through short courses, mentorship or practical implementation research. All share a common goal: building lasting, regionally-led research ecosystems that respond to regional needs.

    Because when researchers are trained locally, mentored locally and supported to ask the right questions, health systems respond better. SRHR services improve. And people’s rights, choices and dignity are upheld.


  • Liam Payne reflects on boy band days in Netflix’s Building the Band


    Released July 16, Episode 7 of Netflix’s Building the Band features Liam Payne delivering both playful and warm guidance as he judges musical contestants.

    Filmed in September 2024, just weeks before Payne’s death in October at 31, the episode centers on contestants from Midnight ’Til Morning performing the Goo Goo Dolls’ “Iris.”

    Payne begins with lighthearted feedback, noting that their stage movements included “running about, waving your arms in the air.”

    Later, he joked, “We’re amazing dancers, obviously, in One Direction, it’s what we’re known for,” delivering the line with a tongue-in-cheek grin before illustrating with a deliberately understated hip sway modeled after George Michael, proving that strong presence often trumps choreography.

    The tone then shifts to encouragement as Payne tells the performers, “I feel warm,” praising the emotional resonance of their performance and emphasizing chemistry and connection over flashy dance moves.

    Throughout the series, Payne appears as a main judge alongside Kelly Rowland and Nicole Scherzinger.

    The show was posthumously dedicated to his memory following his fall in October 2024. Building the Band remains his final TV project, extending his legacy of mentorship and musical guidance.
