
In today’s rapidly evolving artificial intelligence environment, organizations are increasingly relying on third-party application programming interfaces from platforms like OpenAI, Google and Amazon Web Services to embed advanced features into their products. These APIs offer significant benefits, particularly in terms of time and cost savings, by enabling companies to leverage existing technology rather than building solutions from scratch.
While this approach can speed up deployment and reduce the burden of managing complex infrastructure, it also raises key legal and privacy issues — like how data flows are controlled, who is responsible for data security, and how licensing restrictions are enforced. The situation becomes even more challenging when the procuring organization opts to use its own API keys instead of those provided by the AI feature developer.
When developers leverage third‑party AI APIs to build and deliver their own AI features, they often do so using their own licensed API keys to access those services. Prompts — for example, data queries, order‑processing commands, or report generation instructions — are sent from the procuring organization’s systems to the developer’s platform and then forwarded to the API provider. The provider applies its AI models and returns outputs, which the developer delivers to the procuring organization.
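The relay described above can be sketched in a few lines. This is a minimal, illustrative example only; the endpoint URL, key, template, and `forward_prompt` function are hypothetical placeholders, not any real provider's API.

```python
# Sketch of the prompt flow: the procuring organization's payload is
# enriched with a developer-supplied template, then forwarded to the
# API provider under the developer's own licensed key.
import json
import urllib.request

DEVELOPER_API_KEY = "sk-example"  # placeholder for the developer's licensed key
PROVIDER_URL = "https://api.example-provider.invalid/v1/complete"  # hypothetical
PROMPT_TEMPLATE = "Summarize the following order data:\n{payload}"

def forward_prompt(org_payload: str) -> str:
    """Enrich the organization's prompt and relay it to the API provider."""
    prompt = PROMPT_TEMPLATE.format(payload=org_payload)
    request = urllib.request.Request(
        PROVIDER_URL,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {DEVELOPER_API_KEY}",
            "Content-Type": "application/json",
        },
    )
    # The provider applies its models and returns an output, which the
    # developer then delivers back to the procuring organization.
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["output"]
```

Because the developer chooses the template, the key and the destination, it is the party determining the purpose and means of processing at this stage.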
In this process, the developer assumes the role of the data controller because it determines the purpose and means of processing: it decides which prompts to collect, how to combine or enrich them (for example, with developer-supplied templates), and how outputs are used and delivered. As controller, the developer must ensure lawful processing, provide transparency and implement appropriate technical and organizational measures — such as encryption, access controls, logging and regular audits — to protect personal data throughout the life cycle in line with the EU General Data Protection Regulation.
If there is sensitive data involved — such as personal data under the GDPR or personal health information under the Health Insurance Portability and Accountability Act — the developer, who has control over its API keys, can apply appropriate privacy-enhancing technologies before transmitting. These include measures like anonymization, pseudonymization, zero data retention endpoints, and in-flight filtering, to prevent identification and reduce risk, thereby supporting compliance with applicable data protection laws.
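As one concrete illustration of in-flight filtering and pseudonymization, a developer could tokenize obvious identifiers before a prompt leaves its systems, keeping the re-identification map on its own side. The regex below is a deliberately simple sketch; a real deployment would rely on a dedicated PII-detection service, and all names here are illustrative.

```python
# Pseudonymize email addresses in a prompt before transmission to the
# API provider; the mapping needed to re-identify stays with the developer.
import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymize(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace each email with a token; return filtered text and the token map."""
    mapping: dict[str, str] = {}

    def _replace(match: re.Match) -> str:
        token = f"<EMAIL_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_PATTERN.sub(_replace, prompt), mapping

safe_text, key_map = pseudonymize("Contact jane.doe@example.com about order 42.")
# safe_text contains "<EMAIL_0>" instead of the address; key_map never
# leaves the developer's infrastructure.
```

Pairing such filtering with a zero-data-retention endpoint, where the provider offers one, further reduces the risk that identifiable data persists outside the developer's control.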
Once the developer submits prompt data to the API provider, the provider acts as a data processor and is responsible for processing data only in accordance with the developer's documented instructions. To ensure proper governance, the parties should establish a written agreement, such as a data processing agreement, that clearly outlines the scope and lawful purposes of processing, as well as the provider's obligations regarding data retention and deletion.
The agreement should also require the provider to maintain records of processing activities, cooperate with audits, assist the developer with data subject requests and breach notifications, and implement appropriate safeguards — including encryption, access controls, logging, and incident detection/response — all in compliance with GDPR requirements.
As organizations increasingly use AI internally — whether embedding off‑the‑shelf features or developing bespoke capabilities — there is a good chance they already hold API licenses for major platforms such as OpenAI or Azure.
As such, it is increasingly common for procuring organizations to ask that the AI feature developer use the organization's own API keys when calling the underlying API service. This gives the procuring organization more direct control over the data flows, usage and costs associated with the API. However, this shift significantly impacts the role and control of the AI feature developer.
When the procuring organization uses its own API keys to access a developer's AI feature, responsibility for transmitting, storing and controlling access to the data shifts largely to the procuring organization. The developer no longer has full visibility into how the data is handled once it leaves the developer's infrastructure. As a result, it becomes much harder for the developer to verify whether safeguards — such as encryption, access controls or timely data deletion — are properly in place, or to enforce policies that prevent misuse or breaches.
Because of this, it’s crucial to have clear, well-structured contracts between the developer and the organization. These should lay out who’s responsible for what — covering data security, liability and compliance — and reflect the actual level of control each party has over the data and the API.
Effectively managing third‑party AI integrations requires balancing the benefits of rapid deployment and cost savings with the obligation to address privacy and data protection exposures.
Whether data flows go through company‑controlled APIs or customer‑managed keys, robust data‑governance frameworks ensure risks are equitably allocated and information is safeguarded in line with applicable jurisdictional requirements and the sensitivity of the data involved.
Ultimately, clear contractual responsibilities, active oversight and strong governance are essential when deploying AI features via third‑party APIs, especially as organizations increasingly want to use and control access to the AI capabilities they procure.
Rachel Webber, AIGP, CIPP/E, CIPP/US, CIPM, CIPT, FIP, is senior counsel for a software as a service and AI organization.
