Author: admin

  • Saudi, Kuwaiti investors file $2 billion arbitration case against Pakistan over K-Electric dispute – Profit by Pakistan Today

    1. K-Electric not involved in shareholder arbitration  Mettis Global
    2. Gulf investors launch $2bn arbitration case against…


  • Motivation Shapes Memory: NUS-Duke Study Reveals

    Researchers from the Yong Loo Lin School of Medicine, National University of Singapore (NUS Medicine) and Duke University have proposed a neuroscience framework explaining how different types of motivation fundamentally reshape what and…


  • Shafiqur unveils Jamaat’s political and economic roadmap, promises future built on ‘insaf’

    Reiterating Jamaat’s stance on governance, Shafiqur promised a rigid stance against corruption, citing the party’s previous performance in government as evidence of its integrity.


  • Analyzing Regulatory Gaps Revealed by India’s Response to the Grok Debacle

    Union Minister Ashwini Vaishnaw briefs the media in New Delhi on Wednesday, March 5, 2025. (Kamal Singh/PTI via AP)

    What happens when powerful AI tools are released without safeguards into platforms used by millions? This question has occupied headlines after Grok, the generative AI chatbot integrated into the social media platform X, was weaponized to non-consensually create and share sexually explicit and degrading images of women and children. The proliferation of such images on X was a direct result of the introduction of an image generation and editing feature to Grok in December 2024. Grok’s subsequent integration with X and the introduction of a “spicy mode” last year exacerbated the abuse by easing both access to and dissemination of such non-consensual intimate imagery (NCII).

    The Grok incident has rightly triggered widespread outrage across jurisdictions, and regulators around the world are taking action, with responses ranging from blocking Grok entirely, such as in Indonesia, to launching an investigation into its functioning, as in the United Kingdom.

    In India, the Ministry of Electronics and Information Technology (MeitY) issued a letter to X on January 2, 2026, citing a failure to comply with statutory due diligence obligations under the Information Technology Act, 2000 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Beyond adherence to the legal framework, MeitY demanded a report on the steps X was taking to address the issue within 72 hours of the letter’s issuance. Yet while the ministry’s response has been relatively swift, the episode has exposed several deeper, systemic problems.

    First, this response reveals structural problems in how India currently attempts to govern AI-driven harms. MeitY has not initiated a dedicated regulatory or investigative process for Grok as an AI system. Instead, it has relied almost entirely on the existing intermediary liability framework under the IT Act and the IT Rules. Earlier, on December 29, 2025, MeitY issued an advisory to social media intermediaries warning against the hosting, uploading, and transmission of obscene, pornographic, and other unlawful content, and advising them to review their internal compliance frameworks. Both the advisory and the January 2 letter to X make clear that MeitY frames this incident as a failure of platform compliance with legal obligations and due diligence requirements. Simply put, the government’s response is built around takedowns and platform enforcement routed through the intermediary liability regime.

    Additionally, the government is currently deliberating the Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, to combat the rise of deepfakes. These rules would make the labelling of all synthetically generated information mandatory. While well-intentioned, the proposal has raised concerns about compelled speech, the ambiguous definition of what constitutes “synthetic content,” and expanded censorship powers layered onto the existing safe harbor framework. This approach reflects a broader reluctance to regulate and to place binding obligations on AI systems and stakeholders within the AI ecosystem. The anti-regulation sentiment prevalent in India stems from apprehensions that the imposition of ‘bureaucratic fetters’ will hinder the development and adoption of AI in the country.

    The government’s attitude toward wider AI regulation can be seen in the India AI Governance Guidelines released in November 2025. While the Guidelines, released under the India AI Mission, acknowledge the risks posed by AI, they largely defer to the extant legal regime, stating that “a separate law to regulate AI is not needed given the current assessment of risks” and that those risks can be addressed through voluntary measures and “existing laws.” Yet if the Grok episode highlights one thing, it is that the market has failed to regulate itself and that the existing platform-governance laws have failed to effectively address AI-driven harms such as sexualized deepfakes. This, in turn, exposes a dire need for regulation.

    Grok, as a generative AI model capable of producing illegal and abusive content on demand, is not directly regulated as an AI system, but is only regulated indirectly through X’s obligations as a social media intermediary. If similar harm were to occur on stand-alone AI platforms that are not intermediaries under the IT Act, for instance on other generative AI chatbots, there is a real risk that this would fall into a regulatory grey zone. This leaves India without a clear legal framework to require pre-deployment testing, built-in safety and consent guardrails, or any independent oversight of high-risk generative AI tools.

    At the outset, to drive constructive conversations around AI, the Indian government needs to shed the apprehension that regulation will be perceived as a backward response to emerging technologies. There is also an urgent need to move past the reductive, binary narrative that regulation strangles innovation; this line of thinking leads to light-touch regulation and self-regulatory codes administered by industry without oversight, an approach that produces episodes such as the Grok incident. The way forward ought to embrace regulatory responses that deliver tangible accountability from all stakeholders in the AI value chain. For that, a participatory approach to AI governance is essential. As a good first step, the government ought to conduct an open, public, multi-stakeholder consultation that expands the conversation on AI regulation beyond the framework of the IT Act.

    Unless it is accepted that systemic changes are needed to address AI-enabled harms such as NCII, a platform-moderation approach is likely to change little. What is necessary is prioritizing ‘Safety by Design’: mandates for adversarial testing (so-called red teaming), adherence to technical standards such as C2PA for labelling AI-generated content, addressing the presence of NCII and Child Sexual Abuse Material (CSAM) in training data sets, and investment in tools that detect and report such content. Otherwise, any promise to tackle AI-generated sexual abuse will ring hollow.
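The labelling mandate above rests on one technical idea: cryptographically binding a provenance manifest to the content it describes, so that a disclosure label cannot be silently detached or the content silently altered. Below is a minimal teaching sketch of that binding in Python; all names (`make_manifest`, `verify_manifest`, `"example-image-model"`) are hypothetical, and real C2PA manifests are cryptographically signed structures embedded in the asset itself, not bare hashes.

```python
import hashlib

# Teaching sketch of provenance labelling in the spirit of C2PA:
# bind a small manifest (which tool generated the content, and a
# disclosure flag) to the content bytes via a cryptographic hash,
# so later edits to the content invalidate the label. Not the C2PA
# specification itself; names here are hypothetical.

def make_manifest(content: bytes, generator: str) -> dict:
    """Build a manifest binding generator metadata to the content bytes."""
    return {
        "generator": generator,          # hypothetical tool identifier
        "ai_generated": True,            # the disclosure label itself
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Return True if the manifest still matches the content (no tampering)."""
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

image_bytes = b"\x89PNG...synthetic image data..."
manifest = make_manifest(image_bytes, generator="example-image-model")

print(verify_manifest(image_bytes, manifest))         # True: label intact
print(verify_manifest(image_bytes + b"!", manifest))  # False: content edited
```

Detection and reporting pipelines of the kind the paragraph calls for would sit on top of such bindings, flagging assets whose labels are missing or no longer verify.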

    Lastly, it is important to touch upon another issue that plagues the content moderation approach to NCII in India: framing the issue merely in terms of “obscene” or “vulgar” content. This framing misses the core harm of image-based sexual abuse, which is the complete absence of consent. The defining feature of NCII is not a subjective assessment of whether a particular image or video crosses the threshold of “obscenity” or “vulgarity”; the primary violation is the creation of such an image without the meaningful consent of the person depicted. That violation subsists regardless of whether the content is deemed obscene. Reducing such abuse to a question of obscenity conflates this distinct harm with the imposition of moral and socio-cultural standards. A framework grounded in consent and a rights-based understanding therefore arguably offers a better path forward, one that safeguards the interests of victims and allows regulators and platforms to respond to abuse in a victim-focused way while still respecting constitutional free speech protections.

    The Grok incident was an easily predictable outcome of a governance model that allows powerful AI systems to be deployed at scale without enforceable, ex-ante safety obligations. When tools with the capacity to shape behavior, reputation, personal autonomy, dignity, and safety are made available to the mainstream, the harm they can cause is not confined to a few users but ripples outward in ways that are difficult to contain. While X admitted to failures and stated that it had taken down the offending content, this was a reactive measure taken only after large volumes of harmful material had already been generated and widely circulated. Moreover, neither the full scale of the harm, nor the number of victims affected, nor the adequacy of the fixes has been independently verified.

    Reportedly, MeitY was dissatisfied with the platform’s initial response as well. Meanwhile, the broader problem of the dissemination of NCII and CSAM remains unresolved and risks being forgotten. We await MeitY’s further action on this matter.


  • Systemic immune-inflammation index outperforms conventional inflammato

    Introduction

    The global burden of heart failure (HF) continues to escalate, as evidenced by epidemiological surveillance data from European cohorts indicating an annual incidence of approximately 5 cases per 1000 person-years, with an estimated…


  • OnePlus to be dismantled? The smartphone that challenged the iPhone aura may be quietly shut down, says a shocking report

    OnePlus, once known for shaking up the smartphone market with bold launches and fan-driven buzz, may be heading towards an uncertain future. A new report claims the brand is being slowly dismantled by its parent company, Oppo, even though there…


  • The Warriors are preparing for life without Jimmy Butler

    An MRI exam Monday after the injury revealed Jimmy Butler tore the ACL in his right knee during the third quarter.

    SAN FRANCISCO (AP) — Steve Kerr and the Golden State Warriors are still coming to terms with how dramatically their season…


  • LEPAS Opens Its World’s First Showroom in Indonesia, Ushering in a New Chapter in the NEV Market

    JAKARTA, Indonesia, Jan. 21, 2026 /PRNewswire/ — On January 19, LEPAS, the all-new NEV brand under Chery Group, officially opened its World’s First Showroom in Jakarta, Indonesia. More than a hundred attendees, including…


  • Scientists Discover a New Quantum State of Matter Once Considered Impossible : ScienceAlert

    A quantum state of matter has appeared in a material where physicists thought it would be impossible, forcing a rethink on the conditions that govern the behaviors of electrons in certain materials.

    The discovery, made by an international team…


  • Arsenal seal top two Champions League finish after Inter win – standard.co.uk

    1. ‘Tears in my eyes’ – Jesus enjoys ‘dream night’ in San Siro  BBC
    2. Team news: Seven changes made for San Siro visit  Arsenal.com
    3. Inter vs Arsenal LIVE! Champions League…
