WASHINGTON — The Department of the Treasury and the Internal Revenue Service today provided guidance on the “No Tax on Car Loan Interest” provision enacted under the One, Big, Beautiful Bill.
The proposed regulations issued today relate to a new deduction for interest paid on vehicle loans incurred after Dec. 31, 2024, to purchase new made-in-America vehicles for personal use. This new tax benefit applies to both taxpayers who take the standard deduction and those who itemize deductions.
Who can take a deduction for interest on car loans
To help taxpayers take advantage of this new tax benefit, today’s guidance addresses important eligibility criteria, including:
Providing rules relating to new vehicles eligible for the deduction, including for determining if the final assembly of a vehicle occurred in the United States;
Providing rules for determining which vehicle loans qualify and the amount of interest paid on a loan that may be deductible;
Providing rules for determining if a new vehicle is purchased for personal use; and
Identifying taxpayers who can take the deduction and clarifying the $10,000 annual deduction limit.
What lenders need to know
The IRS previously announced transition guidance for certain lenders and other taxpayers receiving interest for vehicle loans in 2025. In general, those persons must file information returns with the IRS to report interest received during the tax year and other information related to the loan. These information returns enable taxpayers to claim the benefits of the vehicle loan interest deduction. To help lenders implement these information reporting requirements, the proposed regulations clarify:
Which lenders and other interest recipients are required to report and the time and manner for this reporting; and
What information must be included on the form provided to the IRS and to taxpayers.
More information
Treasury and IRS invite comments from the public on these proposed regulations by Feb. 2, 2026. Comments can be submitted through Regulations.gov and instructions can be found in the proposed regulations.
For more information, see One, Big, Beautiful Bill provisions on IRS.gov.
A federal judge has approved an order requiring Disney to pay $10 million to settle Federal Trade Commission allegations that the company allowed personal data to be collected from children who viewed kid-directed videos on YouTube without notifying parents or obtaining their consent as required by the Children’s Online Privacy Protection Rule (COPPA Rule).
A complaint, filed in September by the Department of Justice upon notification and referral from the FTC, alleged that Disney Worldwide Services, Inc. and Disney Entertainment Operations LLC (Disney) violated the COPPA Rule by failing to properly label some videos that it uploaded to YouTube as “Made for Kids” (MFK). The complaint alleged that by mislabeling these videos, Disney allowed for the collection, through YouTube, of personal data from children under 13 who viewed child-directed videos and use of that data for targeted advertising to children.
Under the settlement order finalized by a federal judge last week, Disney is required to:
Pay a $10 million civil penalty for violating the COPPA Rule;
Comply with the COPPA Rule, including by notifying parents before collecting personal information from children under 13 and obtaining verifiable parental consent for collection and use of that data; and
Establish and implement a program to review whether videos posted to YouTube should be designated as MFK—unless YouTube implements age assurance technologies that can determine the age, age range, or age category of all YouTube users or no longer allows content creators to label videos as MFK. This forward-looking provision reflects and anticipates the growing use of age assurance technologies to protect kids online.
From 2 January 2026, we’ll be updating our parking arrangements for patients and visitors. This includes new time bands and an increase to our current parking charges.
We know that any change to parking can feel frustrating, and this is not a decision we’ve taken lightly. This is our first increase in many years, and the new rates will bring us in line with other local hospitals.
We also understand that parking at our hospital can be challenging at busy times, and that we do not currently have enough spaces to meet the demand. By updating our charges, we can reinvest directly into improving the patient and visitor experience, including maintenance, safety, and longer-term plans to make accessing our hospital easier for everyone.
Concessions for certain patients and their relatives, such as Blue Badge holders (with a valid badge displayed) or parents of a child staying overnight, will remain in place. If you are on a low income, you may be entitled to claim back the cost of your parking under the national Healthcare Travel Costs Scheme (HTCS), which helps individuals with the reasonable costs of travel to and from NHS appointments and treatments.
From 2 January 2026, the charges will apply Monday to Sunday, 8am to 7.30pm.
Up to 30 minutes: FREE
Up to 2 hours: £3.50
Up to 4 hours: £5.50
Up to 6 hours: £7.50
Up to 8 hours: £9.00
Over 8 hours: £10.00
You can pay for your parking by cash, card, Google Pay or Apple Pay. QR code posters around the hospital can also be used to pay or top up your parking.
Thank you for your understanding and support as we make these important changes to improve your experience at Queen Victoria Hospital.
In the bowels of the US Federal Reserve this summer, two of the world’s most powerful men, sporting glistening white hard hats, stood before reporters looking like students forced to work together on a group project.
Allies of Donald Trump had spent weeks trying to manufacture a scandal around ongoing renovations of the central bank’s Washington headquarters and its costs. Now here was the US president, on a rare visit, examining the project for himself.
“It looks like it’s about $3.1bn. It went up a little bit – or a lot,” Trump said, as Jerome Powell, the typically calm Fed chair, vigorously shook his head. “So the $2.7bn is now $3.1bn–”
“I’m not aware of that, Mr President,” Powell quickly interjected, as Trump pulled out a paper from his suit pocket as evidence. “I haven’t heard that from anybody at the Fed.”
The remarkable public encounter in late July was described as a “tussle”, “spar” and “feud” by news outlets and came to symbolize an extraordinary battle for control of the world’s largest economy.
Never before has a president been so publicly, and relentlessly, critical of the country’s top monetary policymaker. For decades, successive administrations have allowed the Fed, as the institution tasked with steering the US economy, to function independently, without political interference. No longer.
Trump, who later embarked upon a vast construction of his own, with the surprise demolition of the entire East Wing of the White House for a new ballroom, continues to threaten legal action over the Fed renovations, and direct seemingly unlimited anger toward Powell.
Few believe this historic test of the central bank’s independence has run its course. In fact, it is widely expected to intensify in the coming months.
“The institution was built to withstand a moment like this,” said Claudia Sahm, a former Fed economist. “Some of the battle lines were drawn, but we haven’t seen this play all the way out.”
‘The president should at least have a say’
The US economy has had a rollercoaster year. Trump’s widespread tariffs and crackdown on immigration destabilized prices and the labor market – the two domains the Fed aims to protect, and balance, by setting interest rates. Higher interest rates can calm inflation, but also risk raising unemployment. Lowering rates can stimulate economic growth, but risk higher prices.
At the beginning of the year, it looked like the Fed had achieved a so-called “soft landing”: inflation had fallen significantly from the 40-year high it scaled in 2022, during the feverish post-pandemic economy, but the labor market had remained broadly stable. Interest rates had gone from near zero to a range of 5.25% to 5.5%.
Wall Street was nevertheless anxious for rates to come down. On the campaign trail, Trump had made clear that he would want a say over how things are run at the Fed, and repeatedly suggested that he would pressure the central bank to lower rates.
“I feel the president should at least have say in there. I feel that strongly,” he said in August 2024. “In my case, I made a lot of money, I was very successful, and I think I have a better instinct than, in my cases, people that would be on the Federal Reserve, or the chairman.”
[Chart: Interest rates in the US]
‘Too Late Powell’
Once he retook office, Trump’s laser focus on lowering rates only intensified. When his “Liberation Day” tariff announcement in April caused stock markets to plummet, he raged against the Fed for not acting quickly enough.
“If I want him out, he’ll be out of there real fast, believe me,” Trump said of Powell in mid-April. Markets did not respond kindly to the threat, and Trump eventually said that Powell’s job was safe.
It was the beginning, however, of what would be a long summer of presidential attacks against the Fed. Trump continued to blast “Too Late Powell” on social media, and attempted to draw the central bank’s renovations into the spotlight.
Once it was clear those strategies weren’t working, the White House changed course. In August, Trump announced he would fire Lisa Cook, a Biden-appointed Fed governor and member of the rate-setting Federal Open Market Committee.
Bill Pulte, a close ally of Trump and head of the Federal Housing Finance Agency, which regulates mortgages, alleged that Cook had committed mortgage fraud. She had purportedly listed two homes as her primary residence, which could get her a better mortgage rate.
Cook, whose term ends in 2038, has since sued the White House to keep her role. The case will ultimately be determined next year by the supreme court, which temporarily blocked Cook’s firing. Her lawyers are arguing the president needs demonstrated “cause” to fire a Fed governor, and that the fraud allegations come from “cherrypicking” facts.
‘No risk-free path’
While the highly anticipated decision on Trump’s bid to fire Cook will have sweeping consequences for his campaign to exert greater control over the Fed, he is also preparing to pick Powell’s replacement as Fed chair.
Earlier this year, the supreme court mentioned the Fed in a separate ruling in which it allowed Trump to fire two members of independent labor boards. In its decision, the court said: “The Federal Reserve is a uniquely structured, quasi-private entity that follows in the distinct historical tradition of the First and Second Banks of the United States.”
The suggestion that the court views the Fed differently compared to other federal agencies provides some hope to allies of Cook, and proponents of central banking independence, but it’s still unclear how the justices will ultimately rule.
The White House is “testing the defense”, Sahm said. “Some have held up, and some may not. It’s a pressure campaign.”
But the Fed has so far remained stoic in the face of the White House’s assaults, and Wall Street still appears confident in the Fed’s policymaking – essential for the stability of markets.
“The Fed is always a political lightning rod. When things go wrong, the Fed is the first to blame. They get a lot of criticism when things aren’t going right, and they don’t get a lot of praise when things are going well,” said Ryan Sweet, chief economist at Oxford Economics.
But Trump’s attacks have largely “fallen on deaf ears”, added Sweet. “I don’t think it had any influence on what they did this year with regards to monetary policy.”
In the fall, the Fed started to lower rates, but Powell made it clear that the move was taken out of caution for risks in the labor market. “There is no risk-free path” for the central bank, Powell said in both his October and December press conferences.
Could tension pick up in the new year?
Rates are now sitting at a range of 3.5% to 3.75%, almost two percentage points lower than two years ago. While the White House has softened its attacks on the Fed in recent months, tension could pick up in the new year.
Trump said that he wants to see interest rates go down to 1%, but new projections from Fed officials suggest that most of the 12 voting members in the rate-setting committee don’t expect much change to rates in the next year.
“We’ve now gotten to a place where the risks of [inflation and unemployment] are what we think are broadly, roughly, in balance,” Powell said in December.
Still intent on lowering rates, Trump has instead focused on picking a new chair to replace Powell, whose term as chair is up at the end of May 2026. In recent weeks, he appears to have narrowed his focus to “Two Kevins”: Kevin Warsh, a former Fed governor who Trump says “thinks you have to lower rates”, and Kevin Hassett, current director of the National Economic Council and a staunch Trump loyalist.
Trump said that the next Fed chair should listen to the president. “Typically, that’s not done any more,” he told the Wall Street Journal. “I don’t think he should do exactly what we say, but certainly we’re – I’m a smart voice and should be listened to.”
Economists are mixed on the impact a new Trump-appointed Fed chair can have on overall decision making. The chair is just one vote among 12 on rates, but a Fed chair is also the face of the Fed. Though other officials speak publicly, and frequently espouse their views on the economy, the chair has the biggest microphone, and sets the tone.
“If there are any cracks in the Fed’s independence, it will spread very, very quickly,” Sweet said. “That’s going to really affect market and inflation expectations … So it’s actually counter to what the White House wants.”
Tonight, Canadians will gather with loved ones to celebrate the start of the New Year.
This is a time for families and friends across the country to come together and to welcome the year ahead. On New Year’s Eve, we pause to reflect on the moments over the year that brought us joy, and the people in our lives who made them special.
Although this year has brought more than its share of challenges to our country, we have also been reminded that we are fortunate to be part of one extraordinary, generous, and caring nation.
We are strongest when we are united, when we look out for each other, and when we take care of each other. That is what makes Canada strong.
As this year comes to a close, we resolve to carry that same spirit and those same values into 2026.
Announcer: Welcome to TD Cowen Insights, a space that brings leading thinkers together to share insights and ideas shaping the world around us. Join us as we converse with the top minds who are influencing our global sectors.

Brendan Smith: Welcome back to another episode of Machine Medicine, AI and Healthcare, TD Cowen’s podcast series where we bring you the latest and most important takeaways from the state of AI and the healthcare sector today. I’m your host and TD Cowen Healthcare Analyst, Brendan Smith, and today I’m joined by Simulations Plus’s Chief Executive Officer, Shawn O’Connor. Shawn, it’s great to have you, and welcome.

Shawn O’Connor: Hey, thanks for having me, Brendan.

Brendan Smith: For anyone new to our podcast series, Machine Medicine aims to break down the use of artificial intelligence across healthcare into bite-sized, digestible points one episode at a time. We’re looking to highlight the biggest misconceptions and then recontextualize each piece back into the bigger picture. Today, Shawn and I are exploring how advancements in artificial intelligence are changing the game for biosimulation players, how Simulations Plus itself is future-proofing its own platform, and really what the field of AI-powered biosimulation ultimately means for the industry. Shawn, let’s just dive right in. First, let’s maybe start with a quick point of distinction here. Broad strokes, what do we mean when we say biosimulation versus artificial intelligence, and really, why do next-gen biosimulation platforms, like the one you all developed at SLP, lend themselves so well to the integration of AI?

Shawn O’Connor: Yeah, thanks, Brendan. The misnomer there is biosimulation versus AI. AI is a tool that we’ve used in biosimulation since we began in the early ’90s. Obviously, AI wasn’t the buzzword it is today at that point in time, but early techniques in terms of machine learning are what formed the basis of our products as early as the mid-’90s.
Obviously the tools have improved and advanced over the years, and as they’ve advanced our ability to take biosimulation approaches, in silico modeling of biology and chemical entities, to be predictive across the continuum of drug development, those AI developments have improved our ability to be predictive and insightful to decision-making along the way.

Brendan Smith: Maybe let’s double-click a bit more on SLP’s platform specifically, namely, your first launch within the sweeping AI rollout integration, a little bit more broadly, GastroPlus .2. Historically, what has been the real selling point for GastroPlus’s value-add to your customers, and how do you envision the integration of AI really taking this to the next level?

Shawn O’Connor: We have several platforms that support discovery, preclinical, clinical development, and even have applications after market approval for a drug. The focus and benefit of these tools is really focused on that billion-dollar cost and 12-year timeframe to get a drug to market. How can we shorten those timelines? How can we reduce the cost of a drug getting to market? Each of our applications has different use cases. Biosimulation really has built itself over the years by the introduction of new use cases on an annual basis, new decision points, new simulations, new models that dive deeper into the biology, the science, and leverage AI tools. The real leverage point today is beyond simply the ChatGPT, the information-gathering capabilities of LLMs, and now is evolving into the application of agentic AI into our workflows for biosimulation. What does that mean? What it means is that we can increase the efficiency of the scientist who’s applying these tools to develop models, biological models and drug models, and accelerate their process. Agentic AI can step in and automate processes that typically would require an extended period of time on the part of the scientist before he got his teeth sunk deeply into the real issues.
The mechanistic processes of pulling together models, the mechanical processes, I should say, can be automated and get the scientists much farther down the line, and that frees up his time to expand the application of biosimulation. It allows an organization that is invested in biosimulation to deploy these techniques across their drug programs, across the continuum of time more rapidly.

Brendan Smith: Yeah. You’re getting at a lot of really important questions here, I think, in a lot of the general public’s attempt to understand where this is today, where it’s going. As you’re looking forward over the next, let’s say 12 to 18 months, as you all have talked about continuing to roll out and integrate more and more of these capabilities into the platform, when you look broadly across the drug development spectrum in the healthcare industry, where do you think are really the fastest or maybe nearest-term and really tangible opportunities for some of these AI efficiencies that we’re seeing? The real answer is everywhere, right? But over the near term and how we think about prioritizing, whether you’re a researcher in an academic lab or even at some of the pharma customers that I know you all serve, where do you see as the lowest of these low-hanging fruit today?

Shawn O’Connor: Yeah, I agree. The opportunities span the full continuum. One example of opportunity that is rich in an environment in which the drug candidates are becoming more complex, clinical trials are becoming more complex, therapies that are being developed when the batting average for successful Phase 3 clinical trials is as poor as it is, that complexity translates into cost, and so the cost of Phase 3 clinical trials is increasing. Challenges in terms of patient recruitment and other components of that call out for, how do we improve the batting average? How do we improve the success rate of those Phase 3 clinical trials if they are going to elevate in cost here?
So biosimulation has been applied in protocol development and clinical trial design over time, and yet it is still untapped in terms of the efficiencies that can be impacted there in terms of selecting the right drugs to take into those clinical trials and then defining protocols that optimize the efficacy and toxicity trade-off and potential success for those trials.

Brendan Smith: Yeah. I feel like a lot of these decisions that need to be made both on your side and frankly on the customer side, too, are in many ways tied to where the technology realistically is today. Every day we’re hearing more about the blurring of lines between big tech, like Nvidia, Google, OpenAI, and big pharma, frankly, and especially as some of the pharma guys really start to ramp up their own investments into this technology itself. On that front, one thing we’re asked a lot about is, does pharma’s increasing investment in it come as a tailwind to where you all sit, or is it actually an either/or on their side? Where do you fall in that conversation, and do you expect the integration of AI, or maybe I should say how do you expect the integration of AI into all of these biosim capabilities ultimately impacting their decision-making on investing internally versus outsourcing or licensing?

Shawn O’Connor: Yeah, fair question. Investment is picking up in terms of whether it’s the third parties you reference or our clients themselves in AI. Where is that investment going? That investment is spread out across a spectrum of efforts, but very focused today in terms of data management, the ability for a pharma company to get their arms around the availability of a wealth of information they have internally across many years of drug development, many drug programs, many drug candidates that can prove very valuable in terms of biosimulation. Getting that data into an accessible and curatable and usable format is a real tailwind in terms of biosimulation applications.
Their investment is not an investment in replicating the tools that we offer and provide to them. Their investment is in terms of building an ecosystem that can fit into… Pharma companies are very standard-operating-procedures-oriented, process-oriented, and one of the big movements in our industry is the movement of our clients to building ecosystems internally that integrate both the biosimulation tools and other AI tools and making them integrated so that they can follow a drug candidate from discovery through to approval and beyond, and those tools are easily accessible to the wealth of their participants in the program through that 10-year cycle timeframe. So these investments are very positive, both in terms of building our products into their standard operating procedures and making more and better data available as input into our tools, which leads to more accurate predictive simulations to support their decision-making process.

Brendan Smith: Yeah, and I think your point about replicating what’s already been done by external sources, by, frankly, their already licensed partners, right? It’s such an important point that sometimes I do feel like people miss when they’re trying to dig into this because it’s just like AI, that those letters now carry so much weight in some contexts and then so little in the way that it’s often applied. So I feel like the way that pharma is spending their dollars, where that’s actually going internally versus their licenses, I think it’s important for people to remember it’s not necessarily like they’re not going to do something they don’t have to do, especially if it’s already readily available, commercially available, already a license that they can tap into.
In the same vein of the conversation, I think I would be remiss if I didn’t bring up the latest FDA guidance that came out last week, and we’ve heard a lot about this over the course of 2025 altogether, really in line with the agency’s efforts to start phasing out animal testing in lieu of some of these, what they’re calling, new approach methodologies, or NAMs. How has that recent guidance update, and in particular the one on the non-human primates last week, messaging out of FDA all year, really impacted either your business or your conversations with customers to date? What’s been the industry’s response over the last six to eight months?

Shawn O’Connor: Yeah, no, I’ll step back a little higher up and just say that it is a great example of what has been building momentum in biosimulation, a great example of regulatory support of further use of in silico methods. It’s what’s driven the business over its 30 years of existence and will continue into the future. The NAM announcement with regard to animal testing is also a part of a number of FDA announcements of support of the use of AI tools broadly, and specifically here in terms of animal testing, it’s a raising of the bar. What I mean by that is that biosimulation has been used to date in the animal testing area focused in terms of predicting first-in-human results, characteristics of a drug, and that information used in the design of animal testing steps to reduce species populations, ensuring that the protocol for the animal test provides the appropriate output to take to the next level. The guidelines here raise the bar: rather than just simply minimizing or improving animal testing, can we eliminate the animal test? That is certainly a raising of the bar, an ambitious goal that I think will have some results. At the same time, we need to recognize these things don’t change overnight.
I was positively impressed with the announcement in April, getting the first iteration of the guidelines out by here in December. That’s a pretty good timeframe in terms of the way things work in the regulatory world, so it’s a real ambitious milestone that we’re working towards. Clients are watchful and thoughtful, depending on where their programs are in the development phases. This doesn’t mean things are going to change overnight, and in the next six months all animal testing is going to be taken away. Ultimately it’s probably going to be a combination of certain therapeutic areas where the weight of evidence is sufficient to significantly impact animal testing, and in some therapies, that may not be the case. So this one will be one that builds over time. It’s a new use case for biosimulation, a raising of the bar there that will create opportunity for us in the future. It’s not necessarily a hockey stick that’s going to occur in a short window of time. That’s the nature of this industry.

Brendan Smith: Yeah, and I think that’s such an important point, too. These are the conversations we’ve been having so much over the course of this year, right? You don’t flip a switch and all of a sudden mouse testing and NHP testing is just gone, right? It’s a conversation about, if I’m used to spending X number of dollars and taking X number of years to get the amount and quality of data I need to bring a drug into Phase 1, and you can maybe spend 5, 10% less of that towards animals and use some of that last 5 to 10% to leverage some of these computational approaches and save some time, save some money upfront. It’s really a conversation of maybe that’s 90-55, maybe that’s 80-20, and it just over time shifts, right? It’s not an overnight sensation by any means.
But on that point, what do you think for some of the folks, irrespective of stage of development, but whether they’re individual researchers or entire companies who are maybe dragging their feet a little bit on this, maybe they’re entrenched in their animal studies or what have you, what do you think the industry really needs to see to drive a more concerted or tangible shift, or monetizable shift really, towards full-scale, broad-based adoption of these NAMs? Is it just a matter of time? Do they need to start playing with it and see the money savings? What is that lever, you think?

Shawn O’Connor: Yeah, it’s a little bit of time, evolution, generational change. Science requires a lot of debate and evidence before the momentum changes. There’s also the opportunity for, hey, when you develop better use cases, or I’d say examples of success, that have been driven by biosimulation, then that spurs on adoption. These situations have occurred in multiple scenarios. Bioequivalence waivers for formulation changes is an example from our history where it took a while for people to gain acceptance there, but once it did, it was a flood of, “Hey, we no longer have to do clinical trials to make a formulation change in a drug post-approval,” and that created momentum in biosimulation. Ultimately, the drug sponsor who can point and say, “Along the way from discovery, I wouldn’t have identified the molecular structure if it had not been for AI approaches, through to preclinical progress and elimination of an animal test, through into later-stage clinical development, and here’s what biosimulation did for us,” and then finally the approval of a drug, and that weight of evidence that biosimulation is really what drove it through the process, it didn’t take 10 to 12 years, it didn’t take $1 to $2 billion, the more examples that we have of that nature, the more momentum and speed of adoption will follow.

Brendan Smith: Yeah.
Again, I think it’s such an important point for people to start to wrap their head around. Maybe in that frame of mind, from where you stand in your conversations today, what do you think maybe the investment community in particular either underappreciates or misunderstands altogether, really, about integration of AI with biosimulation and its utility that you think is especially important for any investor to know heading into next year?

Shawn O’Connor: We’ve touched on a couple of those points. A, biosimulation versus AI, it’s not an either/or. Both are the same effort going forward. Secondly, the change dynamic: in an industry where drug development is a scientific-based, regulated process, the change dynamic is not overnight. So in terms of an investor’s view, this is not a hockey-stick environment that’s just, can we time it right and tomorrow it’s going to take off? This is one that’s going to build slowly over time. That’s slow, but it’s good growth, and the important factor there is that the runway is quite long here for long-term growth of biosimulation in the drug development world that supports a company like ours.

Brendan Smith: I know we’ve covered some great ground today and it’s a conversation I’m sure you and I will come back to for months and years to come, but before I let you go, one thing I like to ask all of our guests here is, if everything we’ve discussed today goes over someone’s head, but they’ve made it this far and they’re still listening at this point, what is maybe one point you would really want everyone listening, irrespective of what they do every day, irrespective of their background, to make sure that they really remember and take away from our conversation?

Shawn O’Connor: I’ll point to, it’s wonderful to wake up every morning and be working hard to bring therapies to the market more quickly that help patients.
So in terms of inspiration for myself, my company, and the inspiration for the use of biosimulation here, it’s got a fantastic goal out there that we work towards.

Brendan Smith: Well, that’s great, and I think with that, I want to thank you for hopping on, Shawn, and talking us through what really is the cutting edge of this marriage between software and healthcare technology innovation. I’m sure we’ll have plenty more to discuss over the weeks and months ahead as it pertains to all of this, so really thank you so much for joining.

Shawn O’Connor: Hey, thanks for having me. Appreciate it.

Announcer: Thanks for joining us. Stay tuned for the next episode of TD Cowen Insights.
A law mandating new requirements for job postings by companies with more than 25 employees will take effect Jan. 1, 2026, in Ontario.
In the new year, companies have to disclose in publicly advertised job postings whether AI will be used when selecting a candidate and if the position advertised is vacant, according to incoming changes to the Ontario Employment Standards Act.
Postings will need to list a salary range for the position, with at most a $50,000 difference between the low and high ends, and mention other forms of compensation, including commission and bonuses.
The law will also require companies to follow up with applicants within 45 days of their last interview to let them know whether they were successful.
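To make the requirements above concrete, here is a minimal, illustrative sketch of how a posting could be checked against them. The class and field names are hypothetical, invented for this example; they do not come from the legislation itself.

```python
from dataclasses import dataclass

MAX_RANGE_SPREAD = 50_000  # cap on the width of a posted salary range


@dataclass
class JobPosting:
    """Hypothetical representation of a publicly advertised posting."""
    salary_min: int
    salary_max: int
    discloses_ai_screening: bool  # states whether AI is used in selection
    states_if_vacant: bool        # states whether the position is vacant


def posting_issues(p: JobPosting) -> list[str]:
    """Return the ways a posting misses the new transparency requirements."""
    issues = []
    if p.salary_max - p.salary_min > MAX_RANGE_SPREAD:
        issues.append("salary range wider than $50,000")
    if not p.discloses_ai_screening:
        issues.append("missing AI-use disclosure")
    if not p.states_if_vacant:
        issues.append("does not state whether the position is vacant")
    return issues
```

A posting listing $60,000 to $130,000 with no AI disclosure, for example, would fail on two counts: the $70,000 spread and the missing disclosure.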
Andrea Little, a freelance digital user-experience designer from Kitchener, has been searching for a full-time position since her previous employer did some restructuring this fall.
She said many companies don’t provide details about the role or pay in their job postings. Little said she thinks transparency requirements will be helpful for applicants.
“Having that information up front, it helps me better gauge whether that role’s a fit and if it’s worth pursuing,” she said.
But there’s debate among experts about whether the transparency laws will change much for job seekers.
Just because a salary range is posted in a listing does not mean companies are required to pay within it, said Travis O’Rourke, president of recruitment firm Hays Canada.
O’Rourke said a company could post a job with a senior title and salary, but by the time interviews happen, decide it can get by filling a lower-level position and adjust the pay and title accordingly.
“This legislation allows you to completely do that,” he said.
AI and the application game
Employers across the country are increasingly using AI while selecting candidates. Little said applicants don’t know who, or what, will be receiving their application.
“You can submit something online and a human will never see it,” said Little.
“If a job posting is written in present tense and your resume typically is written in past tense, it won’t see it as a match.”
Eventually, if the application makes it past the AI screening stage, a real person will read it. So applicants have to walk the line between showing character and appealing to algorithms, Little said.
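Little’s tense example can be illustrated with a toy exact-word matcher. This is only a sketch of the naive matching she describes, not how any particular applicant-tracking system actually works; real screeners vary widely and many do handle word variants.

```python
import re


def keyword_overlap(posting: str, resume: str) -> set[str]:
    """Naive exact-word overlap between a posting and a resume."""
    posting_words = set(re.findall(r"[a-z]+", posting.lower()))
    resume_words = set(re.findall(r"[a-z]+", resume.lower()))
    return posting_words & resume_words


# Posting written in present tense, resume in past tense:
posting = "Designs user interfaces and leads usability testing"
resume = "Designed user interfaces and led usability testing"

matched = keyword_overlap(posting, resume)
# "designs"/"designed" and "leads"/"led" never match exactly,
# so the screener misses the two verbs that matter most.
```

A matcher like this rewards resumes that mirror the posting’s exact wording, which is why applicants end up writing partly for the algorithm.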
O’Rourke said the incoming legislation won’t provide much help, because there’s a large umbrella of what could be considered AI and it doesn’t require employers to be specific about how it is used.
Self-advocacy, less ghosting among potential benefits
But posted salary ranges could make a difference for current employees of the companies that are hiring, according to Margaret Yap, a human resources management professor at Toronto Metropolitan University.
If a current employee sees a posting for a role similar to their own with a higher minimum salary, they can advocate for themselves to receive the same pay.
“Management will have to deal with all of these questions from their current employees,” Yap said.
Posted salaries could help improve workplace equity because they allow employees to better understand how their work is valued, said Allison Venditti, founder of Toronto-based pay advocacy group Moms At Work.
She said it could start open conversations about compensation and potential disparities.
“There was a silencing [culture] in place to get people not to talk about it,” she said. “And I’m saying, if you don’t want people to talk about it, you clearly know you’re doing something wrong.”
A common thread among everyone CBC News spoke to is that requiring companies to follow up with applicants is a good thing.
“With interview stages typically having an average of five interviews, to make it to round four and then not hear anything is — it’s hard,” said Little. “And the more it happens, the more devastating it becomes.”
Venditti said it’s also “really bad HR.”
“The fact that we have to legislate that is really sad,” she said.
WASHINGTON – In compliance with a court-ordered deadline, the U.S. Environmental Protection Agency (EPA) completed a robust review of 1,3-butadiene using gold-standard science. We used the best research, data, and testing available, along with input from the public and independent expert peer reviewers, to complete this thorough evaluation. Our comprehensive scientific review found potential unreasonable health risks for workers who breathe in this chemical on the job in 11 specific industrial settings. Personal protective equipment, which is already common in industrial workplaces, will help mitigate these risks. EPA did not find unreasonable risks to the environment, to consumers, or to the general population, including people living near facilities.
As required by law under the Toxic Substances Control Act (TSCA), EPA will now develop rules to protect workers from the risks we identified. This process will include meticulous consideration of health effects, exposure levels, economic impacts, and benefits of use, with extensive stakeholder engagement to ensure the resulting rules are both protective and practical.
EPA is committed to radical transparency throughout the review and risk management process. The review of 1,3-butadiene took six years and considered approximately 20,000 scientific studies across 30 different use cases.
We improved our evaluation by incorporating real-world data and refining some conservative assumptions from our first draft, making our science more accurate and reliable. For example, EPA switched to a more detailed database, the National Emissions Inventory (NEI), which includes specifics such as stack heights, release angles, and emission temperatures that are not reported in the Toxics Release Inventory (TRI). This allowed our evaluation to move away from defaults to more accurate, facility-specific conditions. The NEI database also provides exact coordinates of where emissions are actually released, rather than just general facility locations. This geographic precision gives a more accurate picture of actual exposure risks.
EPA also took into account additional feedback from peer reviewers recommending that we add together the risks from bladder cancer and leukemia. This resulted in a higher overall cancer risk estimate used in the risk evaluation.
Regarding workplace exposures, which is where the highest risks occur, we followed robust scientific practices during our 1,3-butadiene evaluation to provide clear, reliable results. The final rules will give companies clear regulatory certainty while providing workers with necessary protections. Our safeguards will be tough and practical. We will ensure the protections we put in place are workable, taking additional action if new science emerges or conditions change.
In addition to evaluating workplace exposures, we also thoroughly analyzed risk to the environment, to consumers, and to the general population. We are pleased to report that EPA did not find unreasonable risks to the environment, to consumers, or to the general population, including people living near facilities.
Background
1,3-butadiene is a colorless gas essential for manufacturing products Americans use every day, including car tires, adhesives and sealants, paints and coatings, and automotive care products. Consumer products contain only trace amounts, less than 0.001 percent. Unreasonable risks were found in industrial settings where workers could be exposed to much higher levels, with potential health effects including reduced birth weight, anemia, leukemia, and bladder cancer.
Pranab Biswas has joined McGuireWoods as the firm’s chief financial officer, bringing more than three decades of executive experience at professional services and technology companies. He is based in the firm’s Tysons, Virginia, office.
Biswas comes to McGuireWoods from Guidehouse Inc., a global advisory, technology and managed services firm where he served as a finance partner. He previously spent more than a decade at FTI Consulting as vice president of global financial planning and analysis, partnering with firm leadership to drive enterprise performance and strategic investment. During his tenure, FTI’s revenues expanded from approximately $1.4 billion to about $3.7 billion.
Biswas also served in senior finance roles at CA Technologies, including vice president of finance and assistant treasurer with responsibility for capital structure, allocations and global cash management. He held vice president, finance positions at Teleperformance SE and Convergys Corp. and began his career in strategic finance roles at the Canadian Imperial Bank of Commerce.
“Pranab’s wealth of experience in financial management and proven leadership at growing global professional services firms make him an ideal choice to lead the finance group and help achieve our strategic objectives,” said Jeffrey Connor, McGuireWoods’ chief operating officer.
Biswas added, “McGuireWoods is an innovative law firm with a collaborative culture and a well-deserved reputation for excellence. I look forward to working with the leadership team to ensure we are positioned for continued growth and financial strength.”
Biswas earned his MBA from the University of Toronto and a bachelor’s degree from the Indian Institute of Technology, Banaras Hindu University.