Philips have unveiled a further gaming monitor in their Evnia 3000 series. The 25M2N3200U is a 24.5″ ‘Fast IPS’ panel with a 1920 x 1080 resolution and a refresh rate that can be overclocked to 310Hz.
Across sub-Saharan Africa, a quiet revolution is underway. Equipped with smartphones and empowered by broadband connectivity, millions of micro-entrepreneurs are transforming how goods and services are produced and sold in local economies. This isn’t only a tech trend, but a paradigm shift in the economic fabric of the region, which boasts the highest rate of entrepreneurship in the world, with over 22% of working-age Africans launching new ventures. These agile businesses, rooted in socioeconomic adaptation and innovation, are increasingly powered by digital tools that enable mobile payments, online marketplaces, and real-time customer engagement. Mobile internet penetration in Africa has tripled over the past decade, now reaching over 527 million subscribers. With smartphone adoption projected to hit 88% by 2030 and more users accessing the web via mobile devices, digital platforms are rapidly becoming the backbone of economic inclusion. But what factors drive this digital adoption by micro-entrepreneurs? And how does it shape productivity, growth, and policy design? Understanding these forces is not just an academic exercise. It is essential for crafting public economic policies and private sector-led growth-enhancing measures that unlock inclusive, robust, and sustainable development across Africa.
Digital technology is reshaping micro-entrepreneurship across Africa’s agrifood systems, particularly in informal markets where traditional infrastructure remains limited. As mobile broadband expands and smartphone access deepens, digital platforms are emerging as critical tools for trade while themselves being accelerated by growing African trade. This mutually reinforcing cycle leads to job creation and financial inclusion, thereby bridging the formal-informal divide and boosting productivity and output growth. This transformation is particularly apparent in agricultural value chains, where small-scale intermediaries, often women, play a vital role in linking producers to consumers.
A recent study we conducted in Benin highlights these dynamics. In a country where grains and legumes account for 90% of food consumption, food intermediation remains predominantly a women-led subsistence activity. Using data from Bohicon and Ouando, two semi-rural markets in the country, our research shows that 80% of food traders are women, 60% manage teams of six or more workers, and 90% have over a decade of trading experience. This underscores the maturity and economic significance of these informal micro-enterprises. Although most of these micro-entrepreneurs (52%) have no formal education, the sector is rapidly adapting to digitalization, with mobile broadband penetration rising from under 2% to 42% in just ten years.
Nearly half (49%) of surveyed micro-entrepreneurs have adopted digital technologies to trade their products, reshaping how these informal businesses operate. These adopters tend to be more educated than their peers, conduct larger and more frequent transactions, and are embedded in digitally active networks, suggesting that proximity to other users reinforces adoption through network effects. Yet the promise of digital technology adoption comes with notable constraints: 54% of surveyed adopters face connection costs at or above 20% of the national minimum monthly wage, while 45% cite poor internet quality as a major barrier to efficient business operations. These frictions highlight the need for tailored infrastructure and affordability solutions to ensure that digital transformation reaches its full potential among Africa’s micro-entrepreneurs. Our analysis suggests that digital adoption translates into greater productivity: adopters outperform their peers on a range of productivity measures. They also report a 50% increase in both trading frequency and volume, indicating that digital tools are not just modern conveniences, but powerful catalysts for scaling informal enterprises and unlocking latent economic potential.
This sectoral case, among others, illustrates how digital adoption among micro-entrepreneurs is not only accelerating but also redefining the contours of economic participation, especially for women in informal agrifood systems. It also underscores the catalytic role of digitalization in improving food security.
The adoption of digital technologies among micro-entrepreneurs is not random. It reflects deeper patterns of socioeconomic access and network exposure. Our study reveals that younger, wealthier, and more educated individuals are significantly more likely to integrate digital tools into their business practices. Proximity to other digital users also reinforces adoption, suggesting that peer influence and community-level networks play a crucial role in shaping tech-savviness and digital behavior.
These findings carry important implications for digital inclusion strategies across Africa. First, they underscore the need to address structural inequalities in education and income that stifle digital opportunities. Without well-calibrated and targeted interventions, digital transformation could exacerbate existing divides between more advantaged entrepreneurs and those left behind in low-resource settings.
Second, the power of network effects points to the value of localized digital ecosystems. Policies that support digital hubs, peer learning, and community-based training can amplify adoption by leveraging social proximity and trust. Beyond focusing exclusively on individual capacity-building, policymakers could consider how to productively activate collective digital readiness, especially in a continent historically known for a high degree of communality.
Finally, our finding that gender, experience, and formal business affiliation have no significant effect on digital adoption suggests that traditional segmentation may miss key levers of digital transformation in the African context. Digital adoption appears to be less about identity or tenure, and more about access, exposure, and perceived utility. This calls for flexible, tailored, and context-sensitive approaches that prioritize connectivity, affordability, and relevance over rigid gender or social network targeting.
In sum, fostering inclusive digital adoption requires more than expanding digital infrastructure. It calls for a nuanced understanding of who adopts, why, and under what conditions they do. Digital adoption in Benin’s agrifood systems offers valuable insights for designing more effective and equitable digital policies across the continent.
Unlocking the full potential of digital technologies for micro-entrepreneurs in Africa requires a multi-pronged policy approach that addresses both infrastructure and financial constraints. Evidence from our Benin study underscores the importance of well-thought-out and targeted interventions to enhance digital uptake and economic performance among micro-entrepreneurs, especially in informal markets.
First, improving the quality of mobile broadband emerges as a high-impact lever. Our structural model simulations reveal that upgrading internet reliability and speed yields the most significant gains in both adoption rates and productivity. This finding points to the need for sustained and cost-effective investments in digital infrastructure, particularly in semi-rural and underserved areas, where poor connectivity continues to hinder business operations.
Second, while reducing the cost of mobile broadband access has a positive effect, its impact is more modest when broadband connection quality remains low. This suggests that affordability policies must be coupled with service quality improvements to be fully effective. Governments and telecom providers can explore tiered pricing models, public-private partnerships, or intra-platform competition to lower entry barriers without compromising service standards.
Third, easing credit constraints is essential for enabling micro-entrepreneurs to invest in digital tools and scale their operations. Many informal micro-enterprises lack access to formal finance, limiting their ability to purchase smartphones, pay for data plans, or adopt productivity-enhancing digital platforms. Expanding access to microfinance, mobile money, and alternative credit scoring mechanisms, especially those leveraging transactional and utility data, can help bridge this gap.
Together, these findings highlight the need for integrated digital inclusion strategies that combine infrastructure upgrades, affordability measures, and financial empowerment. By aligning these efforts with the lived realities of micro-entrepreneurs, many of whom are women operating in mature but underserved sectors, policymakers can foster a more inclusive and resilient digital economy, and in turn, lift productivity, output, employment, and livelihoods across Africa.
WUHAN, Oct. 10 (Xinhua) — Top seed Aryna Sabalenka extended her unbeaten run at the Wuhan Open to 20 matches, defeating Elena Rybakina 6-3, 6-3 to reach the semifinals, while second seed Iga Swiatek was stunned by Jasmine Paolini of Italy here on…
A vivid Aurora Borealis display reflected in the Glendo Reservoir in Glendo State Park, Wyoming. The Northern Lights may be visible on camera — and possibly to the naked eye — across 12 northern U.S. states and Canada overnight on Saturday,…
Nexans, a global leader in the design and manufacturing of cable systems and energy solutions, today announced the successful conclusion of its 2025 Innovation Summit in Toronto. The event brought together global leaders from energy, policy, finance, and technology to address one of the defining challenges of our time: how to expand and modernize transmission infrastructure to meet surging electricity demand in the era of AI, electrified transport, and digital growth.
The Summit, themed “A New Era of Electrification,” underscored a powerful message: transmission is no longer a technical afterthought – it is the strategic lever of global electrification.
An open letter attributed to “93 employees and ex-employees” of Build A Rocket Boy has accused the studio’s leadership of “longstanding disrespect and mistreatment of your staff.” The open letter is accompanied by news that…
The use of AI tools is proliferating and becoming mainstream. Combined with fast-moving developments in the technology, this means it is becoming increasingly difficult to distinguish AI-generated content – including deepfakes (i.e. images, video or audio intended to impersonate an individual’s likeness or voice) – from human-generated and authentic content. Deepfake technology isn’t, in itself, particularly new, but the ease and scale with which deepfakes can now be produced and disseminated, without easy detection or challenge, has led to urgent calls for a review of regulation in this area.
‘Digital replicas’ (a more benign expression for ‘deepfakes’) can, of course, be created for positive uses. The technology has been used to de-age the actor Harrison Ford in the movie Indiana Jones and the Dial of Destiny and to reanimate deceased actors (such as Carrie Fisher) on screen. But, when digital replicas are made without consent, they can be put to more nefarious uses. Ofcom summarised these risks well in a recent study when it noted that deepfakes can be used to “demean, defraud and disinform”. Many famous people have been the subject of deepfakes, from Taylor Swift through to Stephen Fry and the financial journalist Martin Lewis, but the problem also impacts non-celebrities, and sometimes in devastating ways.
The problem for those impacted is that there is no overarching law regulating deepfakes in the UK. Instead, there is a patchwork of existing laws (for example, IP, data protection, defamation, malicious falsehood) alongside existing laws that address particular harms (such as the use of deepfakes in fraudulent activity). Importantly, current regulatory focus is on the creation and dissemination of non-consensual intimate images in the form of deepfakes, where the Government has taken a number of steps to introduce criminal sanctions, with more developments to come shortly. These developments have been hard fought for, and greatly welcomed by campaigners, but there are still gaps in the legislation: for example, there is nothing yet to address “nudifying” or “undressing” apps, which remove clothing from images.
In addition to the overly complex nature of the current regulatory framework, those impacted by deepfakes face the additional difficulty of tracking down those who create or disseminate such images. Even if they can be identified, the perpetrators are often based overseas and out of reach of the UK regulatory authorities. While contractual protections may assist for some individuals (for example, performers, who may wish to contract against having their performance used to train an AI model), there is no one-size-fits-all approach to this enforcement question. Accordingly, in addition to enhanced regulation, many are looking to the role of the AI model developers and the large tech platforms (including social media) in detecting and expeditiously removing such content, enforcing their terms of use, and, where possible, preventing such content being generated in the first place. But our experience has been that the platforms are often slow to react, which can be detrimental where content can go viral rapidly online.
There are some potential claims that an individual might make in relation to the use of their likeness (image or voice) in a deepfake, some of which are currently less relevant to non-celebrities, but where we may see calls to broaden out the protection available.
In the UK, there are certain forms of IP rights that might be available to provide protection for an individual’s likeness. However, there is no form of personality right or image right in the UK (unlike in some other countries). Potential IP rights that might arise include passing off (where the unauthorised use falsely suggests the individual’s endorsement and the individual can show the necessary goodwill), registered trade marks covering an individual’s name or image, and copyright in the underlying photograph or recording used to create the deepfake (although such copyright will often be owned by a third party rather than the individual depicted).
Separately, performers have certain rights in their performances (there is no requirement to be a celebrity to rely upon these rights), as well as certain moral rights (though in practice moral rights are often waived by performers). While these rights may be relied upon to tackle unauthorised uses of a performance, the performers’ union, Equity, has called on the government to strengthen performers’ rights to encourage licensing and prevent unauthorised AI-related uses. In particular, Equity is lobbying for increased transparency measures and additional rights, including in relation to performance synthesisation, image rights and unwaivable moral rights. It is also concerned about the terms of contracts used by production companies for training AI models/generating digital replicas, citing the example of a performer whose likeness was used as a ‘performance avatar’ and who later discovered it being used to promote the Venezuelan government.
Copyright may, however, have a greater role to play in relation to deepfakes going forward. The Danish government is considering using copyright law to regulate deepfakes by making unauthorised sharing of AI-generated deepfakes illegal, including in relation to deepfakes of non-celebrities. Individuals would be able to demand removal of the images, as well as compensation, and the right would last for up to 50 years after their death. Meanwhile, the central proposal of the US Copyright Office’s report on Digital Replicas is a new federal law to deal with unauthorised digital replicas (again, which would be available for all individuals, not just celebrities), on the grounds that existing laws in the US do not provide sufficient legal redress. A number of US states have also proposed such laws.
In the UK, the Government is currently conducting a consultation process in relation to AI and copyright. While the consultation does not formally consult on specific proposals on digital replicas and personality rights, the Government has said that it is keen to hear views on the topic. This could include whether the current legal framework provides sufficient control to performers over the use of their likenesses/performances (perhaps involving consideration of whether performers should be able to opt their performances out of being used to train AI models).
Information which “relates to” an identified or identifiable individual is their “personal data”, and will, as a general principle, mean that the data subject has rights arising, and those who process the personal data have obligations imposed on them. “Inaccurate” data is still personal data, and, by extension, there is certainly a strong argument that a deepfake of an identifiable individual will also be their personal data. This means that affected individuals potentially have the right to request erasure, or to bring complaints or claims, under the UK GDPR.
A deepfake could give rise to a claim in defamation if it contains false and defamatory information and causes the subject serious reputational harm. Consider a politician who becomes the subject of a fake video where they admit to wrongdoing. The merits will depend on multiple factors including the meaning, nature and extent of publication of the deepfake, and the evidence of reputational harm. There may also be problems locating and identifying the source of the deepfake/its author, problems establishing the liability of any platform hosting the deepfake, and jurisdictional hurdles if they/the platform are based outside of the UK.
Where a deepfake contains true but private and/or confidential information, the subject may be able to bring a claim for misuse of private information and/or breach of confidence if they did not consent to the information being used and shared in this way. What constitutes “private information” is not defined in law, but it is established that it includes information such as: medical information, details of a person’s sexuality and sex life, and details of their home or family life.
The UK Government has recently introduced various pieces of legislation aimed at criminalising conduct around non-consensual intimate deepfakes. As of 31 January 2024, legislation brought in by the Online Safety Act 2023 and inserted into the Sexual Offences Act 2003 criminalises the sharing, or threatening to share, of intimate deepfakes without consent. In addition, the Data (Use and Access) Act 2025, which has recently received Royal Assent, contains provisions criminalising the creation, and the requesting of the creation, of intimate deepfakes without consent (note that these provisions are not yet in force, although their enactment is eagerly awaited).
The EU’s AI Act is a wide-ranging piece of legislation regulating the development and deployment of AI, including generative AI. One of the bedrocks of ensuring trustworthiness and integrity of AI systems is a robust framework of transparency requirements which enables people to know when they are interacting with or are exposed to AI systems and their outputs (including deepfakes or other manipulated content). In that context, the EU AI Act contains a number of transparency requirements, including in relation to deepfakes, which will start to apply from 2 August 2026.
The European Commission has recently published a consultation on the AI Act’s transparency requirements. The responses to its consultation will inform the drafting of Commission guidelines and a Code of Practice on the detection and labelling of artificially generated or manipulated content.
Specifically, in relation to deepfakes and other generated content, Article 50 of the EU AI Act requires providers of generative AI systems to ensure that synthetic audio, image, video and text outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated, and requires deployers of AI systems that generate or manipulate deepfake content to disclose that the content has been artificially generated or manipulated.
Of course, the position in relation to transparency and labelling of AI content is not straightforward, both legally and practically. Many organisations, for example, have partnered with the Coalition for Content Provenance and Authenticity (C2PA) to add labels to AI-generated content (e.g. LinkedIn). These labels are applied automatically on the basis of provenance metadata embedded in the image files, as identified by the C2PA process. However, this may easily be circumvented by stripping the metadata from digital files. It must therefore be anticipated that the discussions around the proposed Code of Practice will lead to a range of (potentially conflicting) viewpoints that may require compromises to be reached in certain areas.
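To illustrate the practical point about stripping metadata, the short Python sketch below (which assumes the Pillow imaging library and hypothetical file names, and is not the C2PA tooling itself) re-saves an image using only its pixel data, so that any metadata on which a provenance label relies is simply not carried across to the new file.

```python
# A minimal sketch, assuming the Pillow library ("pip install Pillow") and
# hypothetical file names. It is not the C2PA toolchain; it simply shows that a
# provenance label stored as file metadata disappears if the pixels are
# re-saved into a fresh file without copying that metadata across.

from PIL import Image


def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save only the pixel data, so EXIF/XMP-style metadata is not written out."""
    with Image.open(src_path) as img:
        pixels_only = Image.new(img.mode, img.size)
        pixels_only.putdata(list(img.getdata()))
        pixels_only.save(dst_path)  # no metadata is passed in, so none is embedded


def carries_exif(path: str) -> bool:
    """Rough check for whether the file still carries any EXIF entries."""
    with Image.open(path) as img:
        return len(img.getexif()) > 0


if __name__ == "__main__":
    strip_metadata("labelled.jpg", "stripped.jpg")  # hypothetical input/output files
    print("original carries EXIF:", carries_exif("labelled.jpg"))
    print("re-saved copy carries EXIF:", carries_exif("stripped.jpg"))
```

The point of the sketch is simply that a label which lives in a file’s metadata travels (or disappears) with that metadata, which is why labelling based purely on embedded metadata is widely regarded as easy to defeat.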
While a number of legal measures are available for individuals who find that their likeness or voice has been used in a deepfake (as well as preventative measures to protect against creation in the first place), the framework for taking action remains a complex one, and so we would recommend anyone impacted to seek specialist legal advice. Those needing support with non-consensual intimate image deepfakes can contact services such as the Revenge Porn Helpline, which provides free assistance with the removal from the internet of intimate images, including deepfakes, shared without consent. The police have also published guidance on reporting potential criminal offences involving deepfakes.
If you would like to discuss issues relating to deepfakes, including how to take action to protect against digital replicas being created and shared, please get in touch with a member of the team.
Indonesia will not allow Israeli gymnasts to compete at the Artistic Gymnastics World Championships, which begin on Oct. 19 in Jakarta, an Indonesian official announced Thursday.
“The Indonesian government has refused visas to Israeli athletes…