Incognito click-clacking on keyboard not a shield
AS social media continue to dominate the spaces in which people express themselves, and with artificial intelligence (AI) making it easier to generate and spread potentially defamatory content, attorney-at-law Stephanie Ewbank is urging the public to exercise caution.
Ewbank, who is also a partner at the law firm Myers, Fletcher and Gordon, warned that hiding behind a screen does not shield individuals from legal consequences, especially when defamatory remarks are made.
She noted that the word “allegedly”, though commonly thought to protect one from a lawsuit when disseminating potentially libellous information, is not a solid defence in defamation matters.
The attorney said a certain level of responsibility comes with putting oneself out there as a commentator on social media, or as someone known for sharing opinions.
“I think perhaps the most important thing, in that context, is to ensure that when you are conveying a view or an opinion, or you’re even repeating something that another platform has indicated in a different context, that you are fact-checking.
“Fact-checking is very important; looking to see, all right, is there a basis for this? Am I repeating something that is not grounded in some level of truth, based on the information that is out there? How am I wording what I’m saying? Am I just repeating it as if it’s a fact, or am I essentially just indicating that it was reported in the news that so and so and so happened?” said Ewbank.
She said, too, that it is important for individuals to consider what they are conveying, based on the overall context in which something is being said.
“Defamation is not just about construing the statement itself that you’re saying is defamatory. One can also use the context in which it was said to essentially interpret the words and the meaning of the words of the alleged defamatory statement. It could be the overall impact of the entire reel — if we’re using a reel for Instagram — the impact of what was said on the whole reel, and if it gave a certain impression that was defamatory,” the attorney explained.
“It’s not even just this sentence or that sentence, it’s also the overall meaning of the words, and I think it’s just a matter of being…really very cautious and responsible in terms of the matters, and the views, and the opinions that are expressed,” she told the Jamaica Observer in an interview last week.
Ewbank further noted that “allegedly” has become a commonly used word when reporting on certain matters, but warned that it does not automatically protect those who publish a statement from a defamation lawsuit.
“It really depends as well on the nature of the statement that is being reported. So, for example, if the underlying statement is defamatory and the publisher had no reasonable basis for repeating it or sharing it, that publisher can still possibly be exposed to defamation proceedings,” she said.
“The court is going to look at the substance and the context of what was said, and then, of course — as I mentioned earlier — if it is in the context of a newspaper or some other kind of blog or social commentary online, there will also be a consideration of the criteria of responsible journalism. And so, it’s not that just attaching the word allegedly, or reportedly, into the sentence will all of a sudden solve or cure any potential exposure,” she told the Observer.
Ewbank noted that while saying “allegedly” is better than not saying it, it cannot be the only safeguard an individual relies on when publishing information that is possibly defamatory.
She noted that there is also a defence of innocent dissemination, under which a person can argue that they did not know a statement was defamatory; however, the individual would have to show that they had no reason to know the information was defamatory.
Other defences include justification (proving that the statement is true), fair comment, and absolute privilege.
With the growing AI landscape, Ewbank said it is also possible for AI-generated content that portrays an individual in a defamatory manner to attract a defamation lawsuit.
“The person who is responsible for creating, publishing and disseminating that content could be sued for that. Or similarly, if you create an AI video that creates a false news report, or what seems to be a news report, or what seems to be an incident happening, and that incident didn’t really happen, and giving that impression would tend to lower the reputation of the person…in terms of what is portrayed in the video, there could be ways of arguing that that content is defamatory and attracts liability in terms of defamation,” she explained.
Ewbank noted that AI is relatively new and the law is still evolving in that area, but the principles of defamation can still be applied.
“In the same way, for example, if someone were to go online and — you know, a person sometimes will create anonymous accounts or create fake accounts and write things from that account — if you’re writing defamatory statements, there are ways to discover or to get court orders to determine the identity of the person who’s behind those fake accounts,” said Ewbank.
“Creating that shield of anonymity does not protect you from the law of defamation,” she warned.