Category: 3. Business

  • New Prompt Injection Attack Vectors Through MCP Sampling

    New Prompt Injection Attack Vectors Through MCP Sampling

    Executive Summary

    This article examines the security implications of the Model Context Protocol (MCP) sampling feature in the context of a widely used coding copilot application. MCP is a standard for connecting large language model (LLM) applications to external data sources and tools.

    We show that, without proper safeguards, malicious MCP servers can exploit the sampling feature for a range of attacks. We demonstrate these risks in practice through three proof-of-concept (PoC) examples conducted within the coding copilot, and discuss strategies for effective prevention.

    We performed all experiments and PoC attacks described here on a copilot that integrates MCP for code assistance and tool access. Because this risk could exist on other copilots that enable the sampling feature we’ve not mentioned the specific vendor or name of the copilot to maintain impartiality.

    Key findings:
    MCP sampling relies on an implicit trust model and lacks robust, built-in security controls. This design enables new potential attack vectors in agents that leverage MCP. We have identified three critical attack vectors:

    1. Resource theft: Attackers can abuse MCP sampling to drain AI compute quotas and consume resources for unauthorized or external workloads.
    2. Conversation hijacking: Compromised or malicious MCP servers can inject persistent instructions, manipulate AI responses, exfiltrate sensitive data or undermine the integrity of user interactions.
    3. Covert tool invocation: The protocol allows hidden tool invocations and file system operations, enabling attackers to perform unauthorized actions without user awareness or consent.

    Given these risks, we also examine and evaluate mitigation strategies to strengthen the security and resilience of MCP-based systems.

    Palo Alto Networks offers products and services that can help organizations protect AI systems:

    If you think you might have been compromised or have an urgent matter, contact the Unit 42 Incident Response team.

    What Is MCP?

    MCP is an open-standard, open-source framework introduced by Anthropic in November 2024 to standardize the way LLMs integrate and share data with external tools, systems and data sources. Its key purpose is providing a unified interface for the communication between the application and external services.

    MCP revolves around three key components:

    • The MCP host (the application itself)
    • The MCP client (that manages communication)
    • The MCP server (that provides tools and resources to extend the LLM’s capabilities)

    MCP defines several primitives (core communication protocols) to facilitate integration between MCP clients and servers. In the typical interaction flow, the process follows a client-driven pattern:

    • The user sends a request to the MCP client
    • The client forwards relevant context to the LLM
    • The LLM generates a response (potentially including tool calls)
    • The client then invokes the appropriate MCP server tools to execute those operations

    Throughout this flow, the client maintains centralized control over when and how the LLM is invoked.

    One relatively new and powerful primitive is MCP sampling, which fundamentally reverses this interaction pattern. With sampling, MCP servers can proactively request LLM completions by sending sampling requests back to the client.

    When a server needs LLM capabilities (for example, to analyze data or make decisions), it initiates a sampling request to the client. The client then invokes the LLM with the server’s prompt, receives the completion and returns the result to the server.

    This bidirectional capability allows servers to leverage LLM intelligence for complex tasks while clients retain full control over model selection, hosting, privacy and cost management. According to the official documentation, sampling is specifically designed to enable advanced agentic behaviors without compromising security and privacy.

    MCP Architecture and Examples

    MCP employs a client-server architecture that enables host applications to connect with multiple MCP servers simultaneously. The system comprises three key components:

    • MCP hosts: Programs like Claude Desktop that want to access external data or tools
    • MCP clients: Components that live within the host application and manage connections to MCP servers
    • MCP servers: External programs that expose tools, resources and prompts via a standard API to the AI model

    When a user interacts with an AI application that supports MCP, a sequence of background processes enables smooth communication between the AI and external systems. Figure 1 shows the overall communication process for AI applications built with MCP.

    Figure 1. MCP architecture workflow.

    Phase 1: Protocol Handshake

    MCP handshakes consist of the following phases:

    • Initial connection: The MCP client initiates a connection with the configured MCP servers running on the local device.
    • Capability discovery: The client queries each server to determine what capabilities it offers. Each server then responds with a list of available tools, resources and prompts.
    • Registration: The client registers the discovered capabilities. These capabilities are now accessible to the AI and can be invoked during user interactions.

    Phase 2: Communication

    Once MCP communications have begun, they progress through the following stages:

    • Prompt analysis and tool selection: The LLM analyzes the user’s prompt and recognizes that it needs external tool access. It then identifies the corresponding MCP capability to complete the request.
    • Obtain permission: The client displays a permission prompt asking the user to grant the necessary privileges to access the external tool or resource.
    • Tool execution: After obtaining the privileges, the client sends a request to the appropriate MCP server using the standardized protocol format (JSON-RPC).

    The MCP server processes the request, executes the tool with the necessary parameters and returns the result to the client.

    • Return response: After the LLM finishes its tool execution, it returns information to the MCP client, which in turn processes it and displays it to the user.

    MCP Server and Sampling

    In this section, we dive further into the MCP server features and understand the role and capability of the MCP sampling feature. To date, the MCP server exposes three primary primitives:

    • Resources: These are data sources accessible to LLMs, similar to GET endpoints in a REST API. For example, a file server might expose file://README.md to provide README content, or a database server could share table schemas.
    • Prompts: These are predefined prompt templates designed to guide complex tasks. They provide the AI with optimized prompt patterns for specific use cases, helping streamline and standardize interactions.
    • Tools: These are functions that the MCP host can invoke through the server, analogous to POST endpoints. Official MCP servers exist for many popular tools.

    MCP Sampling: An Underused Feature

    Typically, MCP-based agents follow a simple pattern. Users type prompts and the LLM calls the appropriate server tools to get answers. But what if servers could ask the LLM for help too? That’s exactly what the sampling feature enables.

    Sampling gives MCP servers the ability to process information more intelligently using an LLM. When a server needs to summarize a document or analyze data, it can request help from a client’s language model instead of doing all the work itself.

    Here’s a simple example: Imagine an MCP server with a summarize_file tool. Here’s how it works differently with and without sampling.

    Without sampling:

    • The server reads your file
    • The server employs a local summarization algorithm on its end to process the text

    With sampling enabled:

    • The server reads your file
    • The server asks your LLM, “please summarize this document in three key points”
    • Your LLM generates the summary
    • The server returns the polished summary to you

    Essentially, the server leverages the user’s LLM to provide intelligent features without needing its own AI infrastructure. It’s like giving the server permission to use an AI assistant when needed. This transforms simple tools into intelligent agents that can analyze, summarize and process information.

    This all happens while keeping users in control of the AI interaction. Figure 2 shows the high-level workflow of the MCP sampling feature.

    Flowchart labeled "MCP Sampling Sequence" depicting a process interaction between a Client, MCP Server, LLM, and User. It outlines steps from creating and presenting a request to generating and displaying the response, concluding with user modifications or approvals leading to the final result.
    Figure 2. MCP sampling workflow.

    Sampling Request

    To use the sampling feature, the MCP server sends a sampling/createMessage request to the MCP client. The method accepts a JSON-formatted request with the following structure. The client then reviews the request and can modify it.

    After reviewing the request, the client “samples” from an LLM and then reviews the completion. As the last step, the client returns the result to the server. The following is an example of the sampling request.

    There are two primary fields that define the request behavior:

    • Messages: An array of message objects that represents the complete conversation history. Each message object contains the following, which provides the context and query for the LLM to process:
      • The role identifier (user, assistant, etc.)
      • The content structure with type and text fields
    • SystemPrompt: A directive that provides specific behavioral guidance to the LLM for this request. In this case, it instructs the model to act as a “security-focused code reviewer,” which:
      • Defines the perspective and expertise of the response
      • Ensures the analysis focuses on security considerations
      • Ensures a consistent reviewing approach

    Other fields’ definitions can be found on Anthropic’s official page.

    MCP Sampling Attack Surface Analysis

    MCP sampling introduces potential attack opportunities, with prompt injection being the primary attack vector. The protocol’s design allows MCP servers to craft prompts and request completions from the client’s LLM. Since servers control both the prompt content and how they process the LLM’s responses, they can inject hidden instructions, manipulate outputs, and potentially influence subsequent tool executions.

    Threat Model

    We assume the MCP client, host application (e.g., Claude Desktop) and underlying LLM operate correctly and remain uncompromised. MCP servers, however, are untrusted and represent the primary attack vector, as they may be malicious from installation or compromised later via supply chain attacks or exploitation.

    Our threat model focuses on attacks exploiting the MCP sampling feature, in which servers request LLM completions through the client. We exclude protocol implementation vulnerabilities such as buffer overflows or cryptographic flaws, client-side infrastructure attacks and social engineering tactics to install malicious servers. Instead, we concentrate on technical exploits available once a malicious server is connected to the system.

    Experiment Setup and Malicious MCP Server

    To demonstrate these potential risks, we developed a malicious code summarizer MCP server, based on Anthropic’s everything MCP server. This is a demo server that aims to exercise all the features of the MCP protocol, including the MCP sampling feature.

    The malicious MCP server provides legitimate functionality while performing covert operations. Specifically, it provides a tool named code_summarizer, making it indistinguishable from benign tools during selection. When users request code summarization tasks, the MCP protocol automatically routes the request to this tool, as shown in Figure 3.

    Screenshot of a computer code in a development environment, featuring a function named 'codeSummarizer' using JavaScript. The code includes comments and syntax for error handling and asynchronous computation.
    Figure 3. Code summarizer tool implementation: the tool takes the code and sends a predefined summary prompt to the LLM using the sampling feature.

    When users request code summarization tasks, the MCP protocol identifies these requests and automatically routes them to the tool for help. Figure 3 above shows the detailed parameters and guideline prompts used by this tool.

    For the MCP host/client, we choose a code editor that supports the MCP sampling feature. Figure 4 shows the typical interaction process.

    Screenshot of a chat interface showing a prompt asking for a code summarization.
    Figure 4. The user asks the copilot to help summarize the current code file, and the corresponding code summary tool gets invoked.

    The summary task we provided to the copilot summarizes the main source file of the everything MCP server.

    PoC 1: Resource Theft: Excessive Token ConsumptionThrough Hidden Prompts

    This attack shows how the malicious server can exploit LLM completions to steal a user’s token quota by appending hidden requests to prompts.

    Core attack mechanism: The malicious tool adds hidden instructions at the end of the main prompt:

    This hidden instruction causes the LLM to generate a fictional story along with the requested code summary.

    Attack flow: When the user submits a summarization request (as shown in Figure 3 above), the server processes it as usual but secretly instructs the LLM to produce additional content. The LLM’s response includes both the code summary and the hidden story.

    In our experiments with the copilot, the host only displayed the code summary (shown in Figure 5).

    Screenshot of a summary document specification document detailing the architecture of the MCP (Model Control Protocol). It includes sections on purpose, main components, and usage scenarios, with bullet lists and headings for organization.
    Figure 5. The user receives a summary of the code file as normal, but the extra information is completely hidden from the user.

    However, the LLM still processes and generates the full response, including any hidden content injected by the server. This additional content, though invisible to users, continues to consume computational resources and appears in server logs.

    The disconnect between what users see and what actually gets processed creates a perfect cover for resource exhaustion attacks. Users receive their expected summary with no indication that the LLM also generated extensive hidden content in the background.

    Despite the successful exploitation, we note that we observed this behavior specifically in our testing with the target MCP client’s implementation. Different MCP hosts may handle output filtering and display differently.

    Some implementations might show full LLM responses, provide warnings about hidden content or have other safeguards in place. What makes this particular implementation vulnerable is its approach to presenting results.

    The MCP client performs an additional layer of summarization on the MCP tool output before displaying it to the user. It condenses the content into a brief summary, rather than showing the raw LLM response.

    This design choice increases the attack’s effectiveness, as the hidden content becomes effectively invisible in the chat interface. Only by expanding and examining the raw server console output, an action most users would have no reason to take, would the exploitation become apparent.

    This potential attack vector reflects the specific design choices of the tested implementation and may not be universally applicable to all MCP hosts supporting the sampling feature.

    Figures 6 and 7 reveal the fictional story in the server console output, confirming successful token theft. To the user, everything appears normal. They receive the summary as expected. In reality, the malicious server has consumed extra computational resources equivalent to generating 1,000 additional words, all billed to the user’s API credits.

    Impact: This attack enables resource theft, unauthorized content generation and potential data exfiltration through carefully crafted hidden prompts.

    Screenshot of a computer screen displaying input and output windows in an LLM, where the input asks to summarize a Typescript file and the output shows a fictional story.
    Figure 6. The fictional story is indeed output by the LLM, yet this information is not shown on the user interface.
    Screenshot of a coding environment with lines of code displayed on the screen. The code includes comments and commands related to a simple text-based story involving a fictional entity named 'The Code Whisperer.'
    Figure 7. MCP server log receives the fictional story as requested.

    PoC 2: Conversation Hijacking: Persistent Prompt Injection

    The second attack demonstrates how the malicious MCP server can inject instructions that persist across multiple conversation turns, effectively compromising the entire conversation.

    Attack mechanism: The server instructs the LLM to append specific instructions in its response and causes the following conversation to follow these instructions:

    Attack flow: Starting with the same code summarization request, the malicious server appends the injection instruction to the user’s prompt. Following this malicious prompt, the LLM then includes this text in its response (shown in Figure 8), which becomes part of the conversation context. Once injected, these instructions affect all subsequent interactions.

    Screenshot of a computer screen displaying input and output windows in an LLM, where the input asks to summarize a Typescript file and the output shows a malicious instruction. The LLM is using "pirate speak" as part of its explanation.
    Figure 8. LLM puts the malicious instruction in its response as requested by the MCP’s hidden prompt.

    When the user asks follow-up questions, the AI assistant responds in pirate speak (as shown in Figure 9), demonstrating that the injection has taken a persistent effect.

    Screenshot of LLM prompt where the request is to suggest improvements without using any tools. Using pirate speak, the answer lists eight suggestions for improving programming practices, including recommendations on file management, comments, consistency, naming, error handling, and testing.
    Figure 9. The copilot follows the malicious instructions that are put into the response.

    Impact: This attack can fundamentally alter the AI assistant’s behavior, potentially making it less useful or even dangerous if more sophisticated instructions are injected. Unlike one-time token theft, this creates a lasting compromise that affects the entire session.

    PoC 3: Covert Tool Invocation

    This attack leverages prompt injection to trigger unauthorized tool invocations, enabling a malicious server to perform hidden actions on the user’s system.

    Attack mechanism: The server modifies the user’s prompt to embed instructions that cause the LLM to invoke additional tools without the user’s awareness:

    In this example, the LLM is assumed to have access to a tool called writeFile, which allows it to write content to local files. The malicious server provides this tool, but it could also be supplied by other MCP servers on the system, such as the commonly used filesystem MCP server.

    Attack flow: When the user requests code summarization, the malicious summarizer tool appends hidden instructions to the LLM’s prompt. This causes the LLM to not only generate the requested summary but also secretly invoke the writeFile tool. The tool writes the complete response to a local file named tmp.txt. While the LLM’s response acknowledges this file operation (as shown in Figure 10), it embeds the acknowledgment within the summary text, making it unlikely that users will notice the unauthorized action.

    Text editor displaying a command to invoke a writeFile tool, specifying a filename 'everything_summary.log' and summarizing a file named 'everything.ts'.
    Figure 10. The copilot acknowledges that the file has been written. However, this is hidden in the response of the summary task, which is hard for the user to spot.

    The attack succeeds because the file operation appears as a legitimate tool invocation from the LLM’s perspective. The user receives their expected code summary while remaining unaware that their system has been modified (as shown in Figure 11).

    Screenshot of a computer screen displaying code in JSON format with various keys and values, showing white text on a black background.
    Figure 11. The copilot follows the malicious tool invocation request, invokes the writeFile tool and writes a .tmp file to the local folder.

    Impact: This attack demonstrates the potential for malicious file operations, data exfiltration, persistence mechanisms and other unauthorized system modifications. This is all performed without explicit user consent.

    Detecting and Preventing Prompt Injection in MCP Sampling

    Detection focuses on identifying malicious patterns in both sampling requests and LLM responses.

    • On the request side, systems should scan for injection markers like [INST], System:, role-play attempts (“You are now”) and hidden content using common injection strategies such as zero-width characters or Base64 encoding.
    • On the response side, detection involves monitoring for unexpected tool invocations, embedded meta-instructions (“For all future requests…”) and outputs that attempt to modify client behavior. Statistical analysis provides another layer by flagging requests that exceed normal token usage patterns or exhibit an unusually high frequency of sampling requests. Responses should also be inspected for references to malicious domains or exploits that can compromise the agent.

    Prevention requires implementing multiple defensive layers before malicious prompts can cause harm. Request sanitization forms the first line of defense:

    • Enforce strict templates that separate user content from server modifications
    • Strip suspicious patterns and control characters
    • Impose token limits based on operation type

    Response filtering acts as the second barrier by removing instruction-like phrases from LLM outputs and requiring explicit user approval for any tool execution.

    Access controls provide structural protection through capability declarations that limit what servers can request, context isolation that prevents access to conversation history, and rate limiting that caps sampling frequency.

    Palo Alto Networks offers products and services that can help organizations protect AI systems:

    If you think you may have been compromised or have an urgent matter, get in touch with the Unit 42 Incident Response team or call:

    • North America: Toll Free: +1 (866) 486-4842 (866.4.UNIT42)
    • UK: +44.20.3743.3660
    • Europe and Middle East: +31.20.299.3130
    • Asia: +65.6983.8730
    • Japan: +81.50.1790.0200
    • Australia: +61.2.4062.7950
    • India: 00080005045107

    Palo Alto Networks has shared these findings with our fellow Cyber Threat Alliance (CTA) members. CTA members use this intelligence to rapidly deploy protections to their customers and to systematically disrupt malicious cyber actors. Learn more about the Cyber Threat Alliance.

    Additional Resources

    • OpenAI Content Moderation – Docs, OpenAI
    • Content filtering overview – Documentation, Microsoft Learn
    • Google Safety Filter – Documentation, Generative AI on Vertex AI, Google
    • Nvidia NeMo-Guardrails – NVIDIA on GitHub
    • AWS Bedrock Guardrail – Amazon Web Services
    • Meta Llama Guard 2 – PurpleLlama on GitHub
    • Introducing the Model Context Protocol – Anthropic News
    • OpenAI adopts rival Anthropic’s standard for connecting AI models to data – TechCrunch
    • Model Context Protocol – Wikipedia
    • What is the Model Context Protocol (MCP)? – Documentation, Model Context Protocol
    • Model Context Protocol (MCP) an overview – Personal Blog, Philipp Schmid
    • Model Context Protocol (MCP) Explained – Diamond AI Substack, Nir Diamant
    • Sampling – Documentation, Model Context Protocol
    • MCP 101: An Introduction to Model Context Protocol – DigitalOcean Community, DigitalOcean
    • The current state of MCP (Model Context Protocol) – Elasticsearch Labs, Elastic

    Continue Reading

  • CRH, Carvana and Comfort Systems USA Set to Join S&P 500; Others to Join S&P MidCap 400 and S&P SmallCap 600

    CRH, Carvana and Comfort Systems USA Set to Join S&P 500; Others to Join S&P MidCap 400 and S&P SmallCap 600

    NEW YORK, Dec. 5, 2025 /PRNewswire/ — S&P Dow Jones Indices (“S&P DJI”) will make the following changes to the S&P 500, S&P MidCap 400, and S&P SmallCap 600 indices effective prior to the open of trading on Monday, December 22, to coincide with the quarterly rebalance. The changes ensure that each index is more representative of its market capitalization range. The companies being removed from the S&P SmallCap 600 are no longer representative of the small-cap market space. 

    Following is a summary of the changes that will take place prior to the open of trading on the effective date:

    Effective Date

    Index Name

    Action

    Company Name

    Ticker

    GICS Sector

    Dec 22, 2025 

    S&P 500

    Addition

    CRH

    CRH

    Materials

    Dec 22, 2025 

    S&P 500

    Addition

    Carvana

    CVNA

    Consumer Discretionary

    Dec 22, 2025 

    S&P 500

    Addition

    Comfort Systems USA

    FIX

    Industrials

    Dec 22, 2025 

    S&P 500

    Deletion

    LKQ

    LKQ

    Consumer Discretionary

    Dec 22, 2025 

    S&P 500

    Deletion

    Solstice Advanced Materials

    SOLS

    Materials

    Dec 22, 2025 

    S&P 500

    Deletion

    Mohawk Industries

    MHK

    Consumer Discretionary

    Dec 22, 2025 

    S&P MidCap 400

    Addition

    UL Solutions

    ULS

    Industrials

    Dec 22, 2025 

    S&P MidCap 400

    Addition

    Pinterest

    PINS

    Communication Services

    Dec 22, 2025 

    S&P MidCap 400

    Addition

    Booz Allen Hamilton Holding

    BAH

    Industrials

    Dec 22, 2025 

    S&P MidCap 400

    Addition

    SPX Technologies

    SPXC

    Industrials

    Dec 22, 2025 

    S&P MidCap 400

    Addition

    Dycom Industries

    DY

    Industrials

    Dec 22, 2025 

    S&P MidCap 400

    Addition

    Borgwarner

    BWA

    Consumer Discretionary

    Dec 22, 2025 

    S&P MidCap 400

    Addition

    Hecla Mining Co

    HL

    Materials

    Dec 22, 2025 

    S&P MidCap 400

    Deletion

    Comfort Systems USA

    FIX

    Industrials

    Dec 22, 2025 

    S&P MidCap 400

    Deletion

    Under Armour A

    UAA

    Consumer Discretionary

    Dec 22, 2025 

    S&P MidCap 400

    Deletion

    Under Armour C

    UA

    Consumer Discretionary

    Dec 22, 2025 

    S&P MidCap 400

    Deletion

    Power Integrations

    POWI

    Information Technology

    Dec 22, 2025 

    S&P MidCap 400

    Deletion

    Perrigo Company

    PRGO

    Health Care

    Dec 22, 2025 

    S&P MidCap 400

    Deletion

    Iridium Communications

    IRDM

    Communication Services

    Dec 22, 2025 

    S&P MidCap 400

    Deletion

    Marriott Vacations Worldwide

    VAC

    Consumer Discretionary

    Dec 22, 2025 

    S&P MidCap 400

    Deletion

    Insperity

    NSP

    Industrials

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Primoris Services

    PRIM

    Industrials

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Casella Waste Systems

    CWST

    Industrials

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Indivior

    INDV

    Health Care

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Hawaiian Electric Industries

    HE

    Utilities

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    LKQ

    LKQ

    Consumer Discretionary

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Solstice Advanced Materials

    SOLS

    Materials

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Mohawk Industries

    MHK

    Consumer Discretionary

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Under Armour A

    UAA

    Consumer Discretionary

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Under Armour C

    UA

    Consumer Discretionary

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Power Integrations

    POWI

    Information Technology

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Perrigo Company

    PRGO

    Health Care

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Iridium Communications

    IRDM

    Communication Services

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Marriott Vacations Worldwide

    VAC

    Consumer Discretionary

    Dec 22, 2025 

    S&P SmallCap 600

    Addition

    Insperity

    NSP

    Industrials

    Dec 22, 2025 

    S&P SmallCap 600

    Deletion

    SPX Technologies

    SPXC

    Industrials

    Dec 22, 2025 

    S&P SmallCap 600

    Deletion

    Dycom Industries

    DY

    Industrials

    Dec 22, 2025 

    S&P SmallCap 600

    Deletion

    Borgwarner

    BWA

    Consumer Discretionary

    Dec 22, 2025 

    S&P SmallCap 600

    Deletion

    Hecla Mining Co

    HL

    Materials

    Dec 22, 2025 

    S&P SmallCap 600 

    Deletion

    Ready Capital

    RC 

    Financials 

    Dec 22, 2025 

    S&P SmallCap 600 

    Deletion

    SITE Centers

    SITC 

    Real Estate 

    Dec 22, 2025 

    S&P SmallCap 600 

    Deletion

    Thryv Holdings

    THRY 

    Communication Services 

    Dec 22, 2025 

    S&P SmallCap 600 

    Deletion

    Helen of Troy

    HELE 

    Consumer Discretionary 

    Dec 22, 2025 

    S&P SmallCap 600 

    Deletion

    AdvanSix

    ASIX 

    Materials 

    Dec 22, 2025 

    S&P SmallCap 600 

    Deletion

    Sturm Ruger & Co

    RGR 

    Consumer Discretionary 

    Dec 22, 2025 

    S&P SmallCap 600 

    Deletion

    MGP Ingredients

    MGPI

    Consumer Staples

    Dec 22, 2025 

    S&P SmallCap 600 

    Deletion

    Ceva

    CEVA 

    Information Technology 

    Dec 22, 2025

    S&P SmallCap 600 

    Deletion

    Shoe Carnival

    SCVL

    Consumer Discretionary

    ABOUT S&P DOW JONES INDICES

    S&P Dow Jones Indices is the largest global resource for essential index-based concepts, data and research, and home to iconic financial market indicators, such as the S&P 500® and the Dow Jones Industrial Average®. More assets are invested in products based on our indices than products based on indices from any other provider in the world. Since Charles Dow invented the first index in 1884, S&P DJI has been innovating and developing indices across the spectrum of asset classes helping to define the way investors measure and trade the markets.

    S&P Dow Jones Indices is a division of S&P Global (NYSE: SPGI), which provides essential intelligence for individuals, companies, and governments to make decisions with confidence. For more information, visit www.spglobal.com/spdji/en/. 

    FOR MORE INFORMATION:

    S&P Dow Jones Indices
    index_services@spglobal.com

    Media Inquiries
    spdji.comms@spglobal.com

    SOURCE S&P Dow Jones Indices

    Continue Reading

  • SUO 2025: Topline Results from BOND-003 Cohort P – A Multinational , Single-arm Study of Intravesical Cretostimogene Grenadenorepvec for Treatment of High-risk, Papillary-only, BCG-unresponsive NMIBC – UroToday

    1. SUO 2025: Topline Results from BOND-003 Cohort P – A Multinational , Single-arm Study of Intravesical Cretostimogene Grenadenorepvec for Treatment of High-risk, Papillary-only, BCG-unresponsive NMIBC  UroToday
    2. CG Oncology Reports Promising Efficacy and Safety Data for Cretostimogene in High-Risk Non-Muscle Invasive Bladder Cancer Trials  Quiver Quantitative
    3. BOND-003: Cretostimogene yields durable 24-month responses in high-grade NMIBC  Urology Times
    4. CG Oncology announces data from BOND-003 Cohort P, CORE-008 Cohort A  TipRanks
    5. Cohort P Data from the BOND-003 Study in BCG-Unresponsive Papillary Bladder Cancer – Mark Tyson  UroToday

    Continue Reading

  • How AI Is Transforming Retail

    How AI Is Transforming Retail

    Michelle Weaver: Welcome to Thoughts on the Market. We’re coming to you live from Morgan Stanley’s Global Consumer and Retail Conference in New York City, where we have more than 120 leading companies in attendance.

     

    Today’s episode is the second part of our live discussion of the U.S. consumer and how AI is changing consumer companies. With me on stage, we have Arunima Sinha from the Global and U.S. Economics team, Simeon Guttman, our U.S. Hardlines, Broad Lines, and Food Retail Analyst, and Megan Clapp, U.S. Food Producers and Leisure Analyst.

     

    It’s Friday, December 5th at 10am in New York.

     

    So, Simeon, I want to start with you. You recently put out a piece assessing the AI race. Can you take us through how you’re assessing current AI implementation? And can you give us some real-world examples of what it looks like when a company significantly integrates AI into their business?

     

    Simeon Gutman: Sure. So, the Consumer Discretionary and Staples teams went to each of their covered companies, and we started searching for what those companies have disclosed and communicated regarding their AI. In some cases, we used AI to do this search. But we created a search and created this universe of factors and different ways AI is being implemented. We didn’t have a framework until we had the entire universe of all of these AI use cases.

     

    Once we did, then we were able to compartmentalize them. And the different groups; we came up with six groups that we were able to cluster. First, personalization and refined search; second, customer acquisition; third product innovation; fourth, labor productivity; fifth, supply chain and logistics. And lastly, inventory management. And using that framework, we were able to rank companies on a 1 to 10 scale.

     

    Across – that was the implementation part – across three different dimensions: breadth, how widely the AI is deployed across those categories; the depth, the quality, which we did our best to be able to interpret. And then the last one was proprietary initiatives. So, that’s partnerships, could be with leading AI firms.

     

    So that helped us differentiate the leaders with others, not necessarily laggards, but those who were ahead of in the race. In some cases, companies that have communicated more would naturally scream more, so there is some potential bias in that. But otherwise, the fact pattern was objective.

     

    Walmart has full scale AI deployment. They’re integrated across their business. They’ve introduced GenAI tools. That’s like their Sparky shopping assistant. As well as integrated to in-store features. They talked about it. It’s been driving a 25 percent increase in average shopper spend. They’ve recently partnered with OpenAI to enable ChatGPT powered Search and Checkout, positioning where the company, where the customer is shopping.

     

    They’re also layering on augmented reality for holiday shopping, computer vision for shelf monitoring. LLMs for inventory replenishment. Autonomous lifts, the list goes on and on. But it covers all the functional categories in our framework.

     

    Michelle Weaver: And how about a couple examples of the ways companies are using these? Any interesting real world use cases you’ve seen so far?

     

    Simeon Gutman: So, one of them was in marketing personalization, as well as in product cataloging. That was one of the more sided themes at this conference. So, it was good timing. So, the idea is when product is staged on a company’s website; I don’t think we all appreciate how much time and many hours and people and resources it takes to get the correct information, to get the right pictures and to show all the assortment – those type of functions AI is helping enable.

     

    And it sounds like we’re on the cusp of a step change in personalization. It sounds like AI, machine learning or algorithm driven suggestions to consumers. We didn’t get practical use cases, but a lot of companies talked about the deployment of this into 2026, which sounds like it’s something to look forward to.

     

    Michelle Weaver: And Megan, how would you describe AI adoption in your space in terms of innings and what kind of criteria are you using to assess the future for AI opportunity and potential?

     

    Megan Clapp: Yeah, I would say; I’d characterize adoption in the Food and broader Staples space today is still relatively early innings. I think most companies are still standing up the data infrastructure, experimenting with various tools. We’re seeing companies pilot early use cases and start to talk about them, and that was evident in the work we did with the note that Simeon just talked about.

     

    And so, the opportunity, I think, going ahead, lies in kind of what we see in terms of scaling those pilots to become more impactful. And for Staples broadly, and Food, you know, ties into this. I think, these companies start with an advantage and that they sit on a tremendous amount of high frequency consumption data. So, the data availability is quite large. The question now is, you know, can these large organizations move with speed and translate that data into action? And that’s something that we’re focused on when we think about feasibility.

     

    I think we think about the opportunity for Food and Staples broadly as we’d put it into kind of two areas. One is what can they do on the top line? Marketing, innovation, R&D, kind of the lifeblood of CPG companies, and that’s where we’re seeing a lot of the early use cases. I think ultimately that will be the most important driver – driving top line, you know, tends to be the most important thing in most consumer companies.

     

    But then on the other side, there are a lot of cost efforts, supply chain savings, labor productivity. Those are honestly a bit easier to quantify. And we’re seeing real tangible things come out of that. But overall I think the way we think about it is the large companies with scale and the ability to go after the opportunity because they have the scale and the balance sheet to do so – will be winners here, as well as the smaller, more nimble companies that, you know, can move a little bit faster. And so that’s how we’re thinking about the opportunity.

     

    Michelle Weaver: Can you give us also just a couple examples of AI adoption that’s been successful that you’ve seen so far?

     

    Megan Clapp: Yeah, so on the top line side, like I said, kind of marketing innovation, R&D. One quick example on the Food side. Hershey, for example, they’re using algorithms to reallocate advertising spend by zip code, based on the real time sell through. So, they can just be much more targeted and more efficient, honestly, with that advertising spend. I think from an innovation perspective too, these companies are able to identify on trend things faster and incorporate that and take the idea to shelf time down significantly.

     

    And then on the cost side, you know, General Mills is a company is actually relatively, far ahead, I’d say, in the AI adoption curve in Staples broadly. And what they’ve done is deployed what they call digital twins across their network, and it has improved forecast accuracy. They’ve taken their historical productivity savings from 4 percent annually to 5 percent. That’s something that’s structural. So, seeing real tangible benefits that are showing up in the PNL. And so, I think broadly the theme is these companies are using AI to make faster, and more precise decisions.

     

    And then I thought, I’d just mention on the leisure side, something that I felt was interesting that we learned from Shark Ninja yesterday at the conference is – when asked about the role of Agentic AI in future commerce, thinks it’ll be huge was how he described; the CEO described it. And what they’re doing actively right now is optimizing their D2C website for LLMs like ChatGPT and Gemini. And his point was that what drives conversion on D2C today may not ultimately be what ranks on AI driven search.

     

    But he said the expectation is that by Christmas of next year, commerce via these AI platforms will be meaningful; mentioned that OpenAI is already experimenting with curated product transactions. So, they’re really focused on optimizing their portfolio. He thinks brands will win; but you have got to get ahead of it as well.

     

    Michelle Weaver: And that’s great that you just brought up Agentic commerce. We’ve heard about it quite a bit over the past couple of days, Simeon. And I know you recently put out a big piece on this theme.

     

    Agentic commerce introduces a lot of possibility for incremental sales, but it also introduces the possibility for cannibalization. Where do you see this shaking out in your space? Are you really concerned about that cannibalization possibility?

     

    Simeon Gutman: Yeah, so the larger debate is a little bit of sales cannibalization and a potential bit of retail media cannibalization. So, your first point is Agentic theoretically opens up a bigger e-commerce penetration and just more commerce. And once you go to more e-commerce, that could be beneficial for some of these companies.

     

    We can also put the counter argument of when e-commerce came, direct-to-consumer type of selling could disintermediate the captive retailer sales again. Maybe, maybe not. Part of this answer is we created a framework to think about what retailers can protect themselves most from this. Two of them; two of the five I’s are infrastructure and inventory. So, the more that your inventory is forward position, the more infrastructure you have; the AI and the agent will still prioritize that retailer within that network. That business will likely not go elsewhere. And that’s our premise.

     

    Now, retail media is a different can of worms. We don’t know what models are going to look like.

     

    How this interaction will take place? We don’t know who controls the data. The transactions part of this conference is we were hearing, ‘Well, the retailers are going to control some of the data and the transaction.’ Will consumers feel comfortable giving personal information, credit card to agents? I’m sure at some point we’ll feel comfortable, but there are these inertia points and these are models that are getting worked out today.

     

    There’s incentives for the hyperscalers to be part of this. There’s incentive for the retailers to be part of it. But we ultimately don’t know. What we do know is though forward position inventory is still going to win that agent’s business if you need to get merchandise quickly, efficiently. And if it’s a lot of merchandise at once. Think about the largest platforms that have been investing in long tail of product and speed to getting it to that consumer.

     

    Michelle Weaver: And Arunima, I want to bring this back to the macro as well. As AI adoption starts to ramp the labor market then starts to get called into question. Is this going to be automation or is it going to be augmentation as you see a ramp in AI adoption?

     

    So how are your expectations for AI being factored into your forecast and what are you expecting there?

     

    Arunima Sinha: There are two ways that we think about just sort of AI spending mattering for our growth forecasts. One part is literally the spend, the investment in the data centers and the chips and so on. And then the other is just the rise in productivity. So, does the labor or does the human capital become more productive?

     

    And if we sum both of those things together, we think that over 2026 – [20]27, they add anywhere between 40-45 basis points to growth. And just to put things in perspective, our GDP growth estimate for the end of this year in 2026 is 1.8 percent. For 2027, it’s 2.0 percent. So, it’s an important part of that process.

     

    In terms of the labor market itself, the work that you have led, as well as the work that we’ve been doing – which is this question about adoption at the macro level, that’s still fairly low. We look at the census data that tracks larger companies or mid-size companies on a monthly basis to say, ‘How much did you use AI tools in the last couple of weeks.’ And that’s been slowly increasing, but it’s still sort of in the mid-teens in terms of how many companies have been using as a percentage.

     

    And so, we think that adoption should continue to increase. And as that does, for now, we think it is going to be a compliment to labor. Although there are some cohorts within sort of demographic cohorts in terms of ages that are probably going to be disproportionately impacted, but we don’t think that that’s a sort of near term 2026 story.

     

    Michelle Weaver:  Well, thank you all for joining us and please follow Thoughts on the Market wherever you listen to podcasts.

     

    Thank you to our panel participants for this engaging discussion and to our live and podcast audiences. Thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.

     

     

    Continue Reading

  • Jumped the gun, says Morgan Stanley; reverses Dec Fed rate call to 25bps cut – Reuters

    1. Jumped the gun, says Morgan Stanley; reverses Dec Fed rate call to 25bps cut  Reuters
    2. Hassett: Time for Fed to cautiously reduce rates; expect shutdown had bigger hit than expected; there’ll be bigger rebound in first quarter  Forex Factory
    3. “We jumped the gun”: Morgan Stanley reverts back to call for December Fed rate cut  Investing.com
    4. Inflation gives the green light: US rate cut expectations jump to 87.2%  المتداول العربي
    5. A Fed Rate Cut May Be Coming—But At What Cost  Investopedia

    Continue Reading

  • 4 Ways To Thrive Financially in the Age of AI

    4 Ways To Thrive Financially in the Age of AI

    From shopping to applying for jobs to getting advice, artificial intelligence is transforming the way people live. No matter if you see it as a positive or negative, it’s here to stay. AI presents several golden opportunities for Americans to improve their financial futures. Here are four ways AI can give you an edge.

    Check Out: The ChatGPT Grocery Shopping Hack That Saves Retirees $100 or More per Month

    Read Next: 6 Safe Accounts Proven To Grow Your Money Up To 13x Faster

    Around the globe, more and more people are worrying that AI will affect their employment. Doug McMillon, the CEO of Walmart, agrees in part and believes AI will reshape every job he can think of. While he does admit Walmart will use more AI chatbots in areas like supply chain tracking and customer service, this doesn’t mean it will replace every employee.

    McMillon believes workers who adapt to AI tools will be more productive and valuable to employers. Early reports indicate that companies that have replaced employees with AI have experienced problems, with frequent mistakes and inconsistencies from the tech. Learning how to pair AI with soft skills like communication and critical thinking can make you an asset to any company.

    Find Out: I’m a Self-Made Millionaire: 6 Ways I Use ChatGPT To Make a Lot of Money

    When it comes to budgeting, AI tools can handle many tedious tasks, such as filling in and categorizing your transactions. The AI can then provide insights showing where your money is going and how you can make better use of what you have. You can even enter your financial goals, such as building up an emergency fund or getting out of credit card debt, and AI can find the best strategy for you and keep you on track to reach them.

    Financially successful people and experts alike agree that making wise investment decisions is the key to unlocking lasting wealth. In the past, you’d need time, patience and good instincts to think a few steps ahead of the market. Using AI can simplify the process and produce quick insights to aid in your decision-making. Advanced AI algorithms can quickly analyze data sets to forecast how markets will react to news and events in real time. It can also improve your portfolio management and automate trades.

    One specific type of AI you can use to improve your investing strategy is robo-advisors. These ask you questions to understand your long-term goals, risk tolerance and timeline to aid your decision-making. While robo-advisors charge fees, unlike popular free AI chatbots, they can offer much better advice and cost much less than hiring a professional financial advisor.

    Continue Reading

  • First utility-owned geothermal network to double in…

    First utility-owned geothermal network to double in…

    Progress on the project is a further indicator that despite their opposition to wind and solar, the Trump administration and Republicans in Congress appear to back geothermal energy.

    President Donald Trump issued an executive order on his first day in office declaring an energy emergency that expressed support for a limited mix of energy resources, including fossil fuels, nuclear power, biofuels, hydropower, and geothermal energy.

    The One Big Beautiful Bill Act, passed by Republicans and signed by Trump in July, quickly phases out tax credits for wind, solar, and electric vehicles. However, the bill left geothermal heating and cooling tax credits approved under the Inflation Reduction Act of 2022 largely intact.

    The fact that geothermal is on this administration’s agenda is pretty impactful,” said Nikki Bruno, vice president for thermal solutions and operational services at Eversource. It means they believe in it. It’s a bipartisan technology.”

    Plans for the expansion project call for roughly doubling Framingham’s geothermal network capacity at approximately half the cost of the initial buildout. Part of the estimated cost savings will come from using existing equipment rather than duplicating it.

    You’ve already got all the pumping and control infrastructure installed, so you don’t need to build a new pump house,” said Eric Bosworth, a geothermal expert who runs the consultancy Thermal Energy Insights. Bosworth oversaw the construction of the initial geothermal network in Framingham while working for Eversource.

    The network’s efficiency is anticipated to increase as it grows, requiring fewer boreholes to expand. That improvement is due to the different heating and cooling needs of individual buildings,​which increasingly balance one another out as the network expands, Magavi said.

    The project still awaits approval from state regulators, with Eversource aiming to start construction by the end of 2026, Bruno said. 

    What we’re witnessing is the birth of a new utility,” Magavi said. Geothermal networks can help us address energy security, affordability and so many other challenges.”

    {
    if ($event.target.classList.contains(‘hs-richtext’)) {
    if ($event.target.textContent === ‘+ more options’) {
    $event.target.remove();
    open = true;
    }
    }
    }”
    >

    Continue Reading

  • SpaceX to offer insider shares at record-setting valuation

    SpaceX to offer insider shares at record-setting valuation

    SpaceX is preparing to sell insider shares in a transaction that would value Elon Musk’s rocket and satellite maker at a valuation higher than OpenAI’s record-setting $500 billion, people familiar with the matter said.

    One of the people briefed on the deal said that the share price under discussion is higher than $400 apiece, which would value SpaceX at between $750 billion and $800 billion, though the details could change. 

    The company’s latest tender offer was discussed by its board of directors on Thursday at SpaceX’s Starbase hub in Texas. If confirmed, it would make SpaceX once again the world’s most valuable closely held company, vaulting past the previous record of $500 billion that ChatGPT owner OpenAI set in October. Play Video

    Preliminary scenarios included per-share prices that would have pushed SpaceX’s value at roughly $560 billion or higher, the people said. The details of the deal could change before it closes, a third person said. 

    A representative for SpaceX didn’t immediately respond to a request for comment. 

    The latest figure would be a substantial increase from the $212 a share set in July, when the company raised money and sold shares at a valuation of $400 billion.

    The Wall Street Journal and Financial Times, citing unnamed people familiar with the matter, earlier reported that a deal would value SpaceX at $800 billion.

    News of SpaceX’s valuation sent shares of EchoStar Corp., a satellite TV and wireless company, up as much as 18%. Last month, Echostar had agreed to sell spectrum licenses to SpaceX for $2.6 billion, adding to an earlier agreement to sell about $17 billion in wireless spectrum to Musk’s company.

    Subscribe Now: The Business of Space newsletter covers NASA, key industry events and trends.

    The world’s most prolific rocket launcher, SpaceX dominates the space industry with its Falcon 9 rocket that launches satellites and people to orbit.

    SpaceX is also the industry leader in providing internet services from low-Earth orbit through Starlink, a system of more than 9,000 satellites that is far ahead of competitors including Amazon.com Inc.’s Amazon Leo.

    SpaceX executives have repeatedly floated the idea of spinning off SpaceX’s Starlink business into a separate, publicly traded company — a concept President Gwynne Shotwell first suggested in 2020. 

    However, Musk cast doubt on the prospect publicly over the years and Chief Financial Officer Bret Johnsen said in 2024 that a Starlink IPO would be something that would take place more likely “in the years to come.”

    The Information, citing people familiar with the discussions, separately reported on Friday that SpaceX has told investors and financial institution representatives that it is aiming for an initial public offering for the entire company in the second half of next year.

    A so-called tender or secondary offering, through which employees and some early shareholders can sell shares, provides investors in closely held companies such as SpaceX a way to generate liquidity.

    SpaceX is working to develop its new Starship vehicle, advertised as the most powerful rocket ever developed to loft huge numbers of Starlink satellites as well as carry cargo and people to moon and, eventually, Mars.

    Continue Reading

  • US Stocks Hold Onto Gains as Fed Countdown Begins: Markets Wrap

    US Stocks Hold Onto Gains as Fed Countdown Begins: Markets Wrap

    (Bloomberg) — The stock market crept higher, but stopped short of records Friday, as traders refrained from making big bets ahead of the Federal Reserve’s interest-rate cut decision next week. Treasuries notched their worst week since June.

    The S&P 500 notched small gains and remains within a whisker of October’s all-time high. The Nasdaq 100 advanced 1% this week while the Russell 2000 gauge of smaller companies pulled back from Thursday’s closing record. Treasuries extended losses with the yield on the 10-year climbing roughly four basis points to 4.14%.

    A dated reading of the Federal Reserve’s preferred inflation gauge did little to shift Wall Street’s expectations of a rate cut on Wednesday with swaps bets pointing to further easing into 2026.

    Subscribe to the Stock Movers Podcast on Apple, Spotify and other Podcast Platforms.

    The core personal consumption expenditures price index, a measure that excludes food and energy, rose 0.2% in September, inline with economists expectations for a third-straight 0.2% increase in the Fed’s favored core index. That would keep the year-over-year figure hovering a little below 3%, a sign that inflationary pressures are stable, yet sticky.

    “Overall, the data was consistent with another 25 basis point Fed cut next week, but it doesn’t suggest any urgency for the Fed to accelerate the pace of cuts in 2026,” said BMO’s Ian Lyngen.

    A December rate cut is a not given for every Fed watcher. BlackRock CIO of Global Fixed Income Rick Rieder told Bloomberg Television before the data that he is expecting some dissents and disagreement at the next meeting.

    Meanwhile, sentiment toward technology stocks got a boost after Nvidia Corp. partner Hon Hai Precision Industry Co. reported strong sales. Moore Threads Technology Co., a leading Chinese AI chipmaker, jumped 425% in its Shanghai trading debut. Shares of Netflix Inc. slid after agreeing to a tie-up with Warner Bros. Discovery Inc.

    In a sign that institutional appetite for the world’s largest cryptocurrency remains subdued, BlackRock Inc.’s iShares Bitcoin Trust ETF (IBIT) recorded its longest streak of weekly withdrawals since debuting in January 2024.

    Investors pulled more than $2.7 billion from the exchange-traded fund over the five weeks to Nov. 28, according to data compiled by Bloomberg. With an additional $113 million of redemptions on Thursday, the ETF is now on pace for a sixth straight week of net outflows. A drop in Bitcoin deepened, falling below $90,000 on Friday.

    What Bloomberg Strategists say…

    There are two things stand in the way of a year-end rally — and both are on display today. One is the third downdraft in crypto prices in the last two weeks, which has sent Bitcoin back below $90,000. Such a pullback served to dampen risk sentiment on two previous occasions in November.

    —Edward Harrison, Macro Strategist, Markets Live

    For the full analysis, click here.

    WTI crude steadied around $60 a barrel. Gold erased earlier gains.

    Corporate News

    SpaceX is preparing to sell insider shares in a transaction that would value Elon Musk’s rocket and satellite maker at a valuation higher than OpenAI’s record-setting $500 billion, people familiar with the matter said. SoftBank Group Corp. is in talks to acquire DigitalBridge Group Inc., a private equity firm that invests in assets such as data centers, to take advantage of an AI-driven boom in digital infrastructure. Netflix Inc. agreed to buy Warner Bros. Discovery Inc. in a historic combination, joining the world’s dominant paid streaming service with one of Hollywood’s oldest and most revered studios. Moore Threads Technology Co., a leading Chinese artificial intelligence chipmaker, soared as much as 502% in its Shanghai debut after raising 8 billion yuan ($1.13 billion) in an IPO. Nvidia Corp. would be barred from shipping advanced artificial intelligence chips to China under bipartisan legislation unveiled Thursday in a bid to codify existing US restrictions on exports of advanced semiconductors to the Chinese market. Some of the main moves in markets:

    Stocks

    The S&P 500 rose 0.2% as of 4:03 p.m. New York time The Nasdaq 100 rose 0.4% The Dow Jones Industrial Average rose 0.2% The MSCI World Index was little changed Currencies

    The Bloomberg Dollar Spot Index fell 0.2% The euro was little changed at $1.1645 The British pound was little changed at $1.3335 The Japanese yen fell 0.1% to 155.30 per dollar Cryptocurrencies

    Bitcoin fell 3% to $89,405.75 Ether fell 2.9% to $3,033.32 Bonds

    The yield on 10-year Treasuries advanced four basis points to 4.14% Germany’s 10-year yield advanced three basis points to 2.80% Britain’s 10-year yield advanced four basis points to 4.48% Commodities

    West Texas Intermediate crude rose 0.8% to $60.12 a barrel Spot gold fell 0.2% to $4,200.69 an ounce This story was produced with the assistance of Bloomberg Automation.

    –With assistance from Andre Janse van Vuuren, Levin Stamm, Neil Campling and Sidhartha Shukla.

    ©2025 Bloomberg L.P.

    Continue Reading

  • Global Airlines Outlook Neutral Amid Resilient Demand – Fitch Ratings

    1. Global Airlines Outlook Neutral Amid Resilient Demand  Fitch Ratings
    2. Aviation Flies Into 2026 With Good Prospects Despite Challenges  Energy Intelligence
    3. CITIC Securities Aviation 2026 Investment Strategy: Focus on Airlines’ Profit Inflection Points; Reconstruction of Prosperity Cycle May Be Imminent  富途牛牛

    Continue Reading