Ireland’s Workplace Relations Commission issues warning against AI misuse

Maureen Daly of Pinsent Masons was commenting following a recent discrimination case before the Workplace Relations Commission (WRC) in Ireland.

The dispute arose after a flight attendant launched a discrimination claim on the grounds of race and family status against his former employer, Ryanair. He also alleged victimisation, harassment, sexual harassment and procedural unfairness in Ryanair’s disciplinary procedure.

However, the adjudication officer said the claimant, who did not have legal representation, failed to provide “cogent evidence” to support his allegations and rejected the claims. In her decision, she also criticised the flight attendant’s suspected use of AI in preparing his submissions, stating they were “rife with citations that were not relevant, mis-quoted and in many instances, non-existent”. This, she said, wasted a considerable amount of both her own time and that of the other party in trying to establish the veracity of the legal citations in his submission.

Although the flight attendant initially appeared to deny using AI to prepare his submission, the decision noted that on the second day of the hearing he acknowledged that he may have used AI and “became defensive” about its use.

The adjudication officer said that his attempts “to rely on phantom citations to support his claims can only be described as egregious and an abuse of process”. She warned that parties making submissions to the WRC, Ireland’s main forum for litigating employment disputes, “have an obligation to ensure that their submissions are relevant and accurate and do not set out to mislead either the other party or the Adjudication Officer”.

Daly, an intellectual property expert at Pinsent Masons, said the case served as a wake-up call for litigants – particularly those representing themselves – and lawyers to take due care when using AI in legal submissions. “AI can serve as a valuable resource in the preparation of legal submissions, offering efficiencies in drafting and research,” she said. “However, it is essential that any AI-generated material is subject to thorough human review to ensure its accuracy, legal soundness and contextual appropriateness. Failure to do so may result in the inclusion of incorrect, misleading or non-compliant content, which could undermine the credibility of the submissions, breach professional obligations or expose the party to legal risk.”

In a direct response to the case, on 30 October, the WRC published new guidance on the use of AI tools to prepare material for submission, reminding parties of the need to “take full responsibility for the content”, that any incorrect or misleading information “may negatively affect [their] case” and that they “may be asked to explain [their] submission or provide clarification”.