Copyright
April 28, 2020
A watermark to arm publishers against fake news
The Covid-19 lockdown emphasizes, if need be, how much we depend on the Internet. Every day we find ourselves in a loop where the real world meets the virtual world. And as always, truth is the basis of our freedom. Yet it is assaulted every year by millions of fake news items on the Internet, which today governs our lives.
Yet a solution exists against fake news
Yet a solution exists against fake news. It was invented and patented by SAS Lamark, a company created in May 2015 that operates under the trade name Imatag. Imatag develops technologies for identifying image and video content, used mainly on behalf of rights holders who want to know how their content is used on the Internet. It holds patents and software in watermarking, content-based image retrieval (CBIR), and content-based video retrieval (CBVR), and has expertise in acquiring and analyzing multimedia data on the Internet, particularly on social networks.
Disinformation, misinformation and malinformation
All kinds of misleading content swarm on the web: propaganda, lies, conspiracies, rumors, hoaxes, hyper-partisan content, memes, videos, and manipulated media.
Claire Wardle, a researcher and one of the most respected specialists on the topic, distinguishes three categories: disinformation, content intentionally designed to harm; misinformation, false information shared without intent to harm; and malinformation, a lie based on a core of truth.
By delving a little deeper, she established subcategories.
- Satire, more dangerous than it seems, because the more it is shared, the further Internet users get from the original poster, and the harder it becomes to recognize its parodic tone.
- False connection, using sensational headlines to attract clicks.
- Misleading content, or the partial use of information, such as fragments of statistics. The most common form is false context, in which authentic photos or videos are given misleading captions.
- Impostor content, which uses the logo of a famous media or institution.
- Manipulated content, which is genuine content of which one aspect has been modified (a photomontage, or an edited or altered video).
- Finally, fabricated content, entirely false and deliberately designed to deceive, is the realm of the deepfake.
How can we guarantee the veracity of information and the authenticity of its source and of the site presenting it?
Internet users spend several hours a day on social networks viewing news content. But much of this content is posted by users who are not trusted sources, and a certain proportion of it turns out to be manipulated or taken out of its original context. Internet users cannot verify the source of everything they view daily. The responsibility for integrating tools that enable the traceability and verification of the information produced and shared therefore falls on the platforms, in collaboration with publishers and press agencies. Imatag is positioning itself as a technology supplier for this ecosystem.
Much fake news extracts a particularly spectacular or shocking video or photograph from its original context and uses it to illustrate other information, whether false or true.
This shared photo is a textbook example of false context. It appeared on social networks in mid-March 2020, when Italy recorded 475 Covid-19 deaths in a single day.
The image actually dates back to October 5, 2013, seven years earlier, in Lampedusa (below), after a migrant boat capsized in the Mediterranean, drowning more than a hundred people. (AFP photo found by IMATAG.)
The date on which the photograph or video in the suspect post was taken is the first item to verify. If it predates or postdates the information it illustrates, doubt is legitimate. However, there are many obstacles to using metadata to confirm the authenticity of a photograph. What means do we have?
There are many obstacles to using metadata to confirm the authenticity of a photograph
The first is to go to Google Images, drag the suspicious photo into the search engine, and see what comes up.
What the search engine then performs is a search by "similarity." It looks for images resembling the one that was uploaded, based on color, shape, and content patterns. The result is a set of visually similar images, quite often including the exact same one.
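Under the hood, similarity engines typically reduce each image to a compact perceptual fingerprint and compare fingerprints rather than raw pixels. As an illustration only (not Google's actual algorithm), here is the classic "average hash" on toy grayscale grids; real systems first resize images to a small fixed grid and use far more robust descriptors:

```python
def average_hash(pixels):
    """Perceptual 'average hash' of a grayscale pixel grid.

    Each bit is 1 if the pixel is brighter than the image mean, else 0.
    Visually similar images yield hashes with a small Hamming distance.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; a low distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 "images": the second is a slightly brightened copy of the first,
# the third has its bright and dark pixels swapped.
original   = [[10, 200, 30, 220], [15, 210, 25, 230],
              [12, 205, 35, 225], [18, 215, 28, 235]]
brightened = [[p + 5 for p in row] for row in original]
unrelated  = [[200, 10, 220, 30], [210, 15, 230, 25],
              [205, 12, 225, 35], [215, 18, 235, 28]]

d_same = hamming_distance(average_hash(original), average_hash(brightened))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # 0 16 -- the brightened copy still matches
```

Because the hash depends only on each pixel's brightness relative to the mean, uniform brightening leaves it unchanged, which is exactly why such searches find re-posted copies of a photo.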
But there will be no indication of the photo's date, because chronology is not central to search engines. In the vast majority of cases, the Internet user will not find the photo's creation date or location, nor the author or photo agency that first published it. Search engines do not display image metadata, the fields where author, source, date, and description are recorded.
News sites, as well as social media platforms, strip metadata in the erroneous belief that it makes their pages load faster.
If, remarkably, the name of the photo agency remains, the Internet user will then have to search for the image on the agency's website. Only then may he eventually identify it and corroborate the caption and date of the original photograph with the one found online.
This manual identification work, tedious for an individual confronted with thousands of fake news items a year, is hugely time-consuming and potentially cost-prohibitive for media companies. It is all the more true for social networks, which spend millions to manually verify the source and veracity of the information users publish.
How can fact-checking services automate the detection of fake news?
Detecting fake news is therefore not easy. News sites, whose credibility is more than ever their only guarantee of survival, today offer fact-checking services that flush out the most widely shared fake news.
But how can we automate the detection of millions of fake news items among the 4 billion photographs posted daily, knowing that only 3% of them still carry copyright metadata?
Tagging or watermarking visual content is the first step toward a reliable answer for news producers, and the essential technical prerequisite for any automated identification of content. To be effective, the watermark must remain invisible and traceable despite all the transformations and alterations photographs typically undergo when turned into fake news.
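IMATAG's patented scheme is proprietary, but the general principle behind robust invisible watermarking can be illustrated with a classic textbook approach: add a faint pseudorandom pattern derived from a secret key, and detect it later by statistical correlation. A minimal sketch under that assumption (not IMATAG's actual algorithm), treating the image as a flat list of pixel values:

```python
import random

def make_pattern(seed, n):
    """Pseudorandom +/-1 pattern; the secret key is the seed."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(n)]

def embed(pixels, pattern, strength=3):
    """Add a faint pattern to the pixels (invisible at low strength).
    A real embedder would clamp the result to [0, 255]."""
    return [p + strength * w for p, w in zip(pixels, pattern)]

def detect(pixels, pattern):
    """Correlate mean-centered pixels with the key pattern.
    A score near the embedding strength means the mark is present;
    near zero means it is absent."""
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) * w for p, w in zip(pixels, pattern)) / len(pixels)

random.seed(0)
n = 100_000
cover = [random.randint(0, 255) for _ in range(n)]  # stand-in for an image
key_pattern = make_pattern(seed=42, n=n)

marked = embed(cover, key_pattern)
# Simulate the degradation a shared photo suffers (noise ~ recompression).
noisy = [p + random.randint(-2, 2) for p in marked]

print(detect(noisy, key_pattern))  # score near 3: watermark survives
print(detect(cover, key_pattern))  # score near 0: unmarked image
```

Because detection relies on correlation over many pixels rather than on fragile metadata fields, the mark survives moderate degradation such as added noise; industrial schemes add resilience to cropping, scaling, recompression, and other edits.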
This very particular invisible and indelible watermarking technology exists. Invented and patented by Imatag, a French startup, it has been scaled up to meet the needs of press agencies such as AFP. Through pixel analysis, it can also identify areas where pixels have been manipulated and thus automatically report that an image has been altered: a critical multi-tool against fake news.
Want to "see" an Invisible Watermark?
Learn more about IMATAG's solution to insert invisible watermarks in your visual content.
Book a demo