Combating deepfake porn under civil law

One Dutch celebrity after another has recently announced that they have fallen victim to deepfake porn videos: AI-manipulated videos in which the victims appear to feature, while in reality they do not. The consequences for them are severe. Recently, several victims announced that they will collectively file criminal charges. A criminal charge can lead to the conviction of the creators (and sometimes also the publishers) of deepfake porn videos. Unfortunately, a criminal conviction does not undo the online publication of the videos. Like an oil slick, the images continue to spread online, and the publishers are often based abroad, where publishing the footage is not always a criminal offence. In this blog, we look at the civil law remedies available to victims to fight online publications of this footage.

Deepfake porn?

AI software makes it possible to superimpose one person’s face onto another person’s face in existing video footage. The algorithm does this entirely on its own, so no manual editing is involved – it is child’s play. The face edited into the footage even adopts the facial expressions of the original face. The result looks lifelike, with major consequences for victims.

Criminal route

Several Dutch celebrities recently announced that they will file criminal charges. Those charges are likely to focus mainly on the makers of the videos. Last year, for the first time in the Netherlands, a creator of deepfake porn was sentenced to 180 hours of community service. That conviction was based on Article 139h of the Dutch Penal Code, which criminalises so-called “revenge porn”. That provision centres on the production of an image of a sexual nature of a person. Although the deepfake videos did not contain a real image of the person, the Amsterdam court ruled that deepfake sexual imagery can also qualify as an image of a sexual nature. This paved the way for the criminal conviction of creators of deepfake porn.

Civil law bases

The problem with deepfake porn videos is that their creator will be unknown in most cases. Of course, for a criminal conviction, the creator must first be tracked down. Moreover, a criminal prosecution does not always lead to a conviction.

Although publishers could also be criminally prosecuted under Dutch law, to my knowledge this has not yet happened in practice. In addition, not every country has a criminal prohibition on revenge porn. So the criminal route does not always offer a solution for victims.

Making and publishing the footage is not only a criminal offence; publishing such visual material can also be unlawful under civil law. This gives victims an option besides criminal law to act against the publication of deepfake porn videos.

Portrait rights

The first way to tackle such videos under civil law lies in portrait law, which is part of the Dutch Copyright Act. Portrait law allows a person portrayed to oppose the publication of their portrait in certain circumstances. The Copyright Act distinguishes between commissioned and non-commissioned portraits. Deepfake porn videos are (obviously) not commissioned portraits. The publication of such a portrait is not allowed if the person portrayed has a reasonable interest opposing it. This therefore involves a weighing of interests.

A reasonable interest against publication may first of all be a commercial interest. Celebrities, for example, may oppose the publication of their portrait if they have so-called marketable popularity. The rationale is that others should not profit from someone’s accumulated reputation without permission.

A so-called moral interest is also considered a reasonable interest. This includes, for example, respect for privacy. The Dutch Supreme Court ruled in 1997 that privacy is violated in particular when a publication places the person portrayed in “a public sphere of eroticism and freedom of opinion”. At issue in that case was an advertisement about nude recreation in which someone was portrayed naked without their consent. Compared to today’s deepfake porn videos, that seems relatively innocent. We may therefore safely assume that this premise certainly applies to such videos.

The privacy interest will still have to be weighed against the right to freedom of expression. Given the serious intrusion of deepfake porn into victims’ privacy, there is little reason to doubt that this balancing of interests will turn out in their favour. Victims of deepfake porn videos therefore have a good basis in portrait law to oppose publication of such videos.

GDPR

Besides portrait rights, the General Data Protection Regulation (GDPR) also provides a basis for standing up against deepfake pornography, as the image of someone’s face qualifies as personal data. Processing personal data requires a valid processing basis. One such basis is consent, which is clearly lacking in this case. Another possible processing basis is the legitimate interest of the controller; in that context, too, a balancing of interests must be made.

If there is no valid basis for processing the personal data, then under the GDPR the data subject has, among other things, the right to erasure of their data – in this case, erasure of the footage.

Which party to address?

So there are at least two civil law bases besides criminal law to stand up against deepfake porn, but how can victims use these as a basis to ensure that the publications are actually taken offline?

The first step will be to demand that the publishers of the deepfake porn take the publication offline. If the publisher cannot be identified, it is possible to write to the host: the party technically responsible for the website on which the footage is published. If the publisher or hosting company does not voluntarily comply with the demand, the court can be asked to impose a publication ban.

There is also the possibility of suing search engines over references to websites hosting the visual material. Those references may be unlawful in themselves. Moreover, under the GDPR, references in search results constitute a separate processing of personal data by the search engine operators. Finally, there is the possibility of involving the Data Protection Authority, which can, among other things, warn or mediate with parties that do not comply with the GDPR.

European Directive

Incidentally, the European Commission is currently working on a proposal for a Directive to combat violence against women. The proposal also addresses deepfake pornography, specifically material with clear similarities to existing persons. Recital 19 reads:

(19) […] The unauthorised production or manipulation, for example by image manipulation, of material giving the impression that another person is engaged in sexual activities should also fall within the definition of this offence, to the extent that the material is subsequently made accessible to a large number of end-users by means of information and communication technology without the consent of the person concerned. This includes so-called “deepfakes” – material bearing clear similarities to existing persons, objects, places or other entities or events, depicting sexual activities of another person and which would be falsely perceived by others as authentic or truthful. To effectively protect victims of such behaviour, threats of such behaviour should also be included in the definition.

A victim yourself?

In addition to criminal law, there are therefore other options for taking action against publications of deepfake porn. Have you become a victim of deepfake porn yourself? We are happy to help you fight online publications.

Lidl’s letter crate infringes on HEMA’s letter crate after all

In 2019, HEMA launched a foldable storage crate to which people can attach their own letters. The letters for the crate are sold separately. The letter crate, with the letters you attach yourself, looks like this:

Lidl was apparently very keen on this product and launched a similar product three years later:

This was a thorn in HEMA’s side. It believed that Lidl’s crate infringed its (copy)rights and, in 2022, sought an order from the preliminary relief judge of the Central Netherlands District Court requiring Lidl to cease its infringements. Among other things, it also sought an order requiring Lidl to carry out a recall and to destroy the recalled crates.

Own intellectual creation

The court first examined whether HEMA’s crate is protected by copyright. To be eligible for copyright protection, a work must have its own original character and bear the personal stamp of its creator. Or, in the simpler words of the ECJ: there must be the author’s own intellectual creation.

If there is an own intellectual creation, but that creation consists exclusively of elements necessary to obtain a certain technical effect, copyright protection is still excluded. Nevertheless, even within technically inspired design choices, sufficient freedom of choice sometimes remains. Crates have in common that they consist of at least four walls, a bottom and two handles. As a designer, you cannot escape these elements when designing a crate: they are what make the crate usable and allow other items to be lifted more easily. How exactly these (essentially) technically necessary elements are designed can vary, though. HEMA argued that enough free design choices were made in the design of its folding crate. For example, the deliberately chosen horizontal lines were said to give it a “transparent” appearance.

Design heritage

The preliminary relief judge of the Central Netherlands District Court agreed. Sufficient free design choices had been made in the design of the folding crate and therefore, according to the preliminary relief judge, the crate has its own original character and bears the personal stamp of its maker. In short, the preliminary relief judge ruled that the design of HEMA’s letter crate is protected by copyright. The judge did note, however, that the crate contains many elements that belong to the “design heritage”. In short, the design heritage consists of everything that has been designed before. The copyright protection is therefore limited, according to the preliminary relief judge.

Total impressions criterion

HEMA’s crate was therefore copyright-protected, according to the preliminary relief judge – albeit only to a limited extent. But does Lidl’s crate infringe HEMA’s copyright? The preliminary relief judge assessed this on the basis of whether sufficient copyright-protected features had been taken over from the work. For utilitarian objects, the so-called total impressions criterion is used for this assessment. As the term suggests, the total impressions that the two utilitarian objects leave on the public are compared. If those impressions are judged to be too similar, copyright infringement is established. In that assessment, account must of course again be taken of the (unprotected) elements that form part of the design heritage discussed earlier. The preliminary relief judge ruled that the total impressions of the crates (as far as the protected elements were concerned) were sufficiently different. No infringement, therefore.

Slavish imitation

Was that the end of the matter? No. HEMA did not only argue that Lidl infringed its copyright; it also claimed that Lidl was guilty of slavish imitation. What on earth is that, I hear you think. As a general rule, when a physical product is no longer protected by an absolute intellectual property right, imitation is in principle free. It is different, however, if this causes unnecessary confusion. An imitating competitor can be expected – even if the design of a physical product is no longer protected by copyright – to do everything that can reasonably be expected of him to avoid causing confusion. A condition for relying on slavish imitation, however, is that the product has a distinctive face on the market. According to the preliminary relief judge, this was the case with the HEMA crate. The judge nevertheless held that Lidl had sufficiently distanced itself from HEMA’s crate – once the elements belonging to the design heritage are thought away. So there was no slavish imitation either, according to the preliminary relief judge.

Appeal

HEMA disagreed with the ruling of the preliminary relief judge of the Central Netherlands District Court and went to the Arnhem-Leeuwarden Court of Appeal. The court of appeal ruled on 26 September 2023 and followed Lidl’s defence. According to the court of appeal, the design of most elements of the crate was mainly determined by technical and functional considerations. Any originality is therefore lacking, according to the court of appeal. Contrary to the preliminary relief judge, the court of appeal therefore ruled that the design of HEMA’s crate is not protected by copyright.

A setback for HEMA, you might think, but the court of appeal took a different view of the reliance on slavish imitation. According to the court of appeal, Lidl’s letter crate is a copy of the HEMA letter crate in almost all respects except the bottom. There may be differences, but according to the court these are so marginal that they will not be noticed by the inattentive consumer of this type of product. Lidl had also copied the picture used by HEMA on the packaging and, in addition, the letter set supplied by Lidl was almost identical to HEMA’s. According to the court, all of this was unnecessary and created a risk of confusion. In short, success for HEMA on appeal.

Conclusion

This case illustrates it nicely: if a product is no longer protected by an absolute intellectual property right, that is not a licence to imitate it completely. One will still have to do everything that can reasonably be expected to avoid causing confusion.

Register a trademark? You can currently do so very cheaply!

Do you have a small or medium-sized business and have you been planning for some time to register one or more trademarks for your goods and/or services? Then now is the time to apply for those trademarks, because you can currently do so at a discount of as much as 75% – both at the Benelux Office for Intellectual Property (BOIP) and at the European Union Intellectual Property Office (EUIPO). You get the discount when you use the European subsidy scheme “Ideas Powered for Business SME Fund”. The scheme was initiated by the European Commission and aims to help small and medium-sized enterprises in the European Union protect their intellectual property rights.

Discount rates

For a trademark registration, you pay a fee to the office where you apply for the trademark. The subsidy scheme applies both to national trademarks in the various Member States and to European Union trademarks. The basic fee for a Benelux trademark is normally at least € 244 and for a European Union trademark € 850. With the subsidy scheme applied, the basic fee for a Benelux trademark is currently only € 61 and for a European Union trademark € 212.50. You also pay only a quarter of the application fee for additional classes. A very generous subsidy scheme, in other words – especially if you wish to register several trademarks. The scheme runs until 8 December 2023, but on a “first come, first served” basis. So don’t wait too long! More information on the terms and conditions of the SME voucher can be found here.
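For illustration, here is a minimal sketch of the arithmetic behind the 75% reimbursement, using only the basic fees mentioned above (the variable and function names are our own and purely illustrative, not part of the scheme):

# Illustrative calculation of the SME Fund reimbursement on basic trademark fees.
# The fee amounts are the basic fees mentioned above; the names are hypothetical.
BASIC_FEES_EUR = {
    "Benelux trademark (BOIP)": 244.00,
    "European Union trademark (EUIPO)": 850.00,
}

REIMBURSEMENT_RATE = 0.75  # 75% of the basic fee is reimbursed under the scheme


def net_fee(basic_fee: float, rate: float = REIMBURSEMENT_RATE) -> float:
    """Return the fee effectively borne by the applicant after reimbursement."""
    return round(basic_fee * (1 - rate), 2)


for name, fee in BASIC_FEES_EUR.items():
    # Prints: Benelux € 244.00 -> € 61.00, EU trademark € 850.00 -> € 212.50
    print(f"{name}: basic fee € {fee:.2f} -> net fee € {net_fee(fee):.2f}")

The same one-quarter factor applies to the fees for additional classes, as noted above.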

We like to think along with you

Do you need advice on applying for your trademark? For example, are you unsure whether to choose a word mark or a combined word/figurative mark? Or do you need help putting together the right list of goods and services for which you wish to apply for the trademark? We have a great deal of expertise in trademarks, are happy to think along with you and will work with you to ensure that your trademark soon enjoys optimal protection.

Also read our page on registering trademarks. There you can read all the things you need to consider when filing a trademark application.

The power of the prompter


In the previous episode, in answering the question of who owns the copyright to the creations of ChatGPT and DALL-E, we left off with the instructor: the person who whispers the prompts to these creation robots. Instructions such as (to ChatGPT): “Write a short explanation about the copyright protection of texts created by ChatGPT”. Or (to DALL-E): “create a painting of artists that struggle about ownership of a work of art, in the style of Salvador Dali.”[1]

Is giving such an instruction, a prompt, after which the machine does the work, sufficient to acquire a copyright on the result?

Well… As for the instructions mentioned above, we can already state at this point that they are inadequate. They are so general in nature that many paths remain open when it comes to the design (yes, even within the style of Dali). All that is prescribed are the subjects, and for the painting also a style indication. For our purpose this is still too general.

concrete design

As we saw in the previous episode, copyright is all about concrete design. To my students, I sometimes summarize it as: “Protected is not what you say, but how you say it.” If five journalists were to independently write a report on the Eurovision Song Contest final, their reports would contain many of the same facts and sub-topics. Still, there would be no question of plagiarism. Each would have used their own descriptions, emphasized certain events, used a particular structure and their own tone of voice. It is these elements that individually shape a text, that make it original, that put a personal stamp on it. The above-mentioned instructions to the machine do none of that.

highly detailed, creative prompts

But exceptions can be imagined. I do not dare to rule out that certain instructions could be so detailed, and make such clear creative choices within that detail, that thinking up the instruction itself already meets the work-test (see part 1 for an explanation of this concept). Be aware that even in such a case it would still be open to question whether that would immediately mean that the picture created on that basis also yields a copyright for the instructor, but it is certainly a step in that direction.

A step, incidentally, for which it is difficult to determine where the lower limit lies. Prof Dirk Visser, who holds the chair of IP at Leiden University, has been working for several months to shed more light on this, among other things with the help of a number of interesting propositions submitted to the vox populi (download pdf – in Dutch). The first of these asks whether a picture created from the AI prompt “blue horse, purple dog & yellow hippopotamus” is copyrighted. Well, I think not. Even this instruction, although original, is still insufficiently detailed for my taste.

the prompt as brush?

One might even wonder whether, at the current state of the art, AI programs are capable of expressing (or rather, in this example, “conveying”) such a level of detail that the outcome of the question (with a necessarily more detailed prompt) could be different. This is also where my aforementioned second question comes in.

That this will one day be different is nonetheless almost a certainty by now – developments are happening so fast. Once we get to that point, the instruction, the prompt, will have become the creator’s instrument, as it were, like a brush for a painter or a camera (or better still: Photoshop) for a photographer. But the creator would then still need to place a firm personal stamp in his prompt (which remains a tall order).

adding one’s own touch

We have probably not yet reached that point. It is a lucky thing for the instructor, however, that such a highly detailed prompt is not at all necessary to obtain copyright. Note that at this point in our search for a possible human copyright holder of an AI creation, we have not found anyone yet. In fact, we may safely conclude that no copyright rests on the average AI output. This is highly convenient for the instructor in this context. After all, it means not only that anyone may freely reproduce and publish the AI creation, but also that anyone may freely “build on” it. In other words, anyone may edit the AI creation to their heart’s content and give it their own (possibly only minor) spin. Post-editing (of text or image) does not require very much to add its own original character to the creation.

Under Dutch copyright law, we speak here of a so-called “reproduction in modified form”. Section 10(2) of the Copyright Act teaches us that such a reproduction is protected as an independent work, albeit without prejudice to the copyright on the original work. But if, due to the absence of human creative choices, the original creation is not copyrighted, that limiting proviso (“without prejudice…”) does not come into play. The instructor is the first person to see the still unprotected AI work – and, if he wants, the only one. This creates a unique opportunity to remain the only one to obtain copyright by means of a simple post-processing operation, now that, as things stand, the “prompt” by itself will almost always be insufficient to reach that goal. This is happening on an ongoing basis.

we have a winner

In short: we have a winner – even if they have to perform a small additional act themselves. More on modified reproduction and the position of the modifier in the next instalment.

[1] This was the instruction for the picture posted with this blog: “Create a painting of artists that struggle about ownership of a work of art, in the style of Salvador Dali“. It was no coincidence that “the style of Salvador Dali” was chosen. In fact, the programme’s name DALL-E is, according to its creators, an amalgamation of the name of the Pixar robot WALL-E and Salvador Dali’s surname.


Who owns the copyright to an AI creation?


That artificial intelligence (AI) is now capable of producing products that can compete with “works of literature, science or art”1 no longer needs to be questioned. ChatGPT and DALL-E have become household terms in recent months. Almost everyone has tried these computer programmes (because that’s just what they are) by now, only to be amazed by the results. To wit: easy-to-read Dutch texts and beautiful pictures, apart from the occasional anglicism or a sixth finger on a hand.

a right for the machine?

When people produce something like this, the product is readily protected by copyright. As soon as it can be said that a product has its own original character and bears the personal stamp of its creator, it meets the so-called “work-test” and protection is a given. So does a machine now get this protection?

We can answer that quickly: no, a machine cannot own a copyright. The Dutch Supreme Court closed that road in 2008, in the famous Endstra judgment, by explaining what the above-mentioned elements of the work-test mean. The problem lies mainly in the element “personal stamp of the creator”. According to the Supreme Court, this means that the work must contain “a form that is the result of creative human labour and thus of creative choices, and which is as such the product of the human mind.”

An image created by DALL-E and a text written by ChatGPT have no such form. Sure: both computer programmes have calculated themselves silly to create a form, but that form is neither the result of creative human labour nor a product of the human mind. So: exit DALL-E and ChatGPT as copyright owners. (Not to mention, of course, the fact that the law does not recognise them as persons, as legal subjects, so that they cannot own any rights to begin with.)

a right for the makers of the machine?

Their creators then, perhaps? The writers of the code underlying ChatGPT and DALL-E? No, those programmers do not own the copyright to the AI creation either. They do, however, own a copyright to the result of their programming: the computer programmes they wrote are undoubtedly copyrighted. But once these programmes are running and, as a result, a certain concrete design has been produced in text or image, we no longer attribute this to the programmer. They may have made it possible for these programmes to produce copyrightable works, but such a work is only protected once it is there: when there is actually a sensorily perceptible concrete design. With that concrete design, the programmers have nothing to do – that was the work of the machine.

the instructor then?

OK – not the programmer(s) then either, but what about the instructor? The person who gave the prompt to the programme that produced the work? Here we are already getting a lot closer to a suitable candidate. Not yet close enough, probably, but we can fix that… More on this in the next instalment of this series.

(The image accompanying this blog was created by Dall-E from the instruction: “oil painting depicting a robot that is confused about copyright”)

  1. the description of a copyrightable work in the Dutch Copyright Act (Auteurswet) ↩︎

Rules on artificial intelligence: the European Union is working on it!

Stephen Hawking predicted in 2014 that artificial intelligence might one day destroy humanity… While that may still sound like science fiction, fear of the consequences of artificial intelligence is currently growing. This, of course, has everything to do with the rapid development of intelligent systems – of which ChatGPT in particular has been in the spotlight recently.

The end of humanity may still be a bit far-fetched, but the Dutch benefits scandal (“Toeslagenaffaire”), for example, already showed that AI systems can intentionally or unintentionally lead to discriminatory outcomes. Based on, among other things, nationality, family composition and salary, an algorithm of the Dutch tax authorities decided who was checked manually. The Data Protection Authority announced in January that it would start additional monitoring of “life-threatening” algorithms.

And recently, 1,100 prominent tech figures even called in an open letter for a temporary brake on the development of artificial intelligence. We should all take at least six months to think about how to plan and control the development of artificial intelligence with care. According to the letter, AI labs are currently caught up in a race to develop ever more powerful digital minds that even their creators can no longer understand, predict or reliably control. Advanced artificial intelligence could radically alter the history of life on Earth, according to the letter’s signatories. Elon Musk, one of the founders of OpenAI and a co-signatory of the letter, announced – somewhat surprisingly – just last week that he is working on a new algorithm: Truth GPT, an alternative to Microsoft and Google’s algorithms.

Regulating artificial intelligence

Legislators have not been idle in recent years either, although this has received considerably less attention. The European Union has been working on a new regulation for artificial intelligence since 2017, and it will be some time before it actually comes into force.

In the draft proposal, the European Commission considers that artificial intelligence can help achieve beneficial social and environmental outcomes and provide important competitive advantages for businesses and the European economy. At the same time, the European Commission notes that artificial intelligence also entails new risks and potential negative consequences for individuals and society. Given the speed of technological change and the potential challenges, the new regulation should result in a balanced approach to artificial intelligence.

Main purposes

On the one hand, the regulation aims to ensure that AI applications are safe and comply with European Union values; on the other, it regulates AI applications from an economic perspective. In short, the new regulation must strike a balance between these two aims.

Prohibited and risky AI systems

The draft regulation puts these words into action and includes a long list of prohibited AI applications in Article 5. For example, AI applications that distort people’s behaviour beyond their awareness are prohibited outright, as are applications that exploit the vulnerability of specific groups – for example, people with disabilities – in a way that is likely to cause physical or mental harm.

Article 5 also prohibits government agencies from using AI applications to assess or classify the trustworthiness of individuals based on their social behaviour or personality traits if the resulting score leads to unfavourable treatment of those individuals that is disproportionate to their social behaviour. Such a system may sound like something that could only exist in TV series like Black Mirror, but a social credit system is already hard at work in China. A lower social score in that system can, for example, make it harder to obtain a mortgage. And in 2019, it was even reported that millions of Chinese citizens with a lower social score were prevented from buying air and train tickets. As far as we are concerned, a very welcome ban in the draft regulation.

The regulation also includes a definition of high-risk AI systems. These include, for example, systems that pose risks to physical or mental health or to fundamental rights. Such systems must meet various technical and other requirements. For example, they must incorporate a “risk management system” that identifies and analyses the relevant risks on an ongoing basis.

To be continued

In summary, developments in the field of artificial intelligence are moving fast and concerns about its potential negative social consequences are considerable. At the same time, serious legislation is in the pipeline within the European Union to regulate artificial intelligence and safeguard fundamental rights. We are following these developments with great interest and will no doubt blog about them more often in the coming period.

Transfer of personal data to third countries after Privacy Shield invalidated

Many companies based in the European Union transfer personal data to, for example, affiliated companies or third parties based in the United States. That ‘transfer’ of personal data did not pose a legal problem under the so-called Privacy Shield. However, the Court of Justice declared the Privacy Shield invalid overnight on 16 July 2020 in the Schrems II case. That landmark decision created a lot of uncertainty regarding the legality of personal data transfers from the EU to the US. Therefore, on 10 November 2020, the European Data Protection Board (“EDPB”) published a set of recommendations on the transfer of personal data to third countries. In this blog, we focus on the invalidated Privacy Shield and the EDPB’s recommendations.

GDPR and third-country transfers

The General Data Protection Regulation is directly applicable throughout the EU and thus allows the transfer of personal data from one European Member State to another without further ado. This is not surprising, as all Member States are, after all, supposed to ensure the level of protection guaranteed by the Regulation. However, transfers of personal data to countries outside the EU, so-called ‘third countries’, are allowed to a much more limited extent. The transfer of personal data to those countries requires an adequacy decision taken by the European Commission or other ‘appropriate safeguards’. In an adequacy decision, the European Commission determines that an entire country, sector or organisation provides an adequate level of protection for the processing of personal data. Once an adequacy decision has been made, the transfer of personal data can take place without obstacles. If an adequacy decision is lacking, the transfer of personal data is only possible if appropriate safeguards are provided. These must be provided by the controller or processor itself and must ensure that data subjects have enforceable rights and effective legal remedies. As appropriate safeguards, the GDPR lists, among others, standard data protection clauses (also known as model contracts) adopted by the European Commission, an approved code of conduct and binding corporate rules.

Privacy Shield and Schrems II

An adequacy decision previously existed for the transfer of personal data to the United States. With it, the European Commission allowed organisations in the European Union to exchange personal data without obstacles with US-based organisations affiliated to the so-called “Privacy Shield”. However, the European Court of Justice invalidated this adequacy decision on 16 July 2020 in the Schrems II case. Briefly, the court reached that decision because US legislation gives intelligence and security agencies the right to access and use data of European Union citizens, even where this is not strictly necessary. The court therefore concluded that the transfer of personal data to the United States was not in line with the requirements of the GDPR. This landmark decision created a lot of confusion: from one day to the next, the widespread transfer of personal data to the US was no longer in line with the General Data Protection Regulation.

EDPB recommendations

In an attempt to clear up this uncertainty, the EDPB published a set of recommendations for the transfer of personal data to third countries on 10 November 2020. The EDPB notes that, among other things, standard clauses (also known as model contracts), an approved code of conduct and binding corporate rules have not been declared invalid by the ECJ. However, this does not mean that their use is automatically an adequate alternative to the invalidated adequacy decision. Model contracts, an approved code of conduct or binding corporate rules can provide a basis for the transfer of personal data to third countries, but when using them you need to stay alert to a number of issues on an ongoing basis. The EDPB provides the following roadmap as an aid:

  • Step 1: Be aware of the personal data you transfer to third countries;
  • Step 2: Know which transfer instruments you use to transfer that personal data to third countries;
  • Step 3: Assess whether there is anything in the law or practice of these third countries that may impair the effectiveness of the appropriate safeguards of the transfer instruments used;
  • Step 4: On this basis, determine whether additional measures are needed to bring the level of protection of the transferred personal data up to the EU standard;
  • Step 5: Take all procedural steps necessary for the adoption of the additional measures. In some cases, for example, this may require you to consult the Data Protection Authority;
  • Step 6: Regularly evaluate the level of protection of the personal data you transfer to the third country. In doing so, keep track of any developments that may affect it.

It should be clear that the transfer of personal data to the United States is no longer as self-evident as it was under the Privacy Shield. The recently published recommendations of the EDPB are merely a tool for implementing transfers of personal data to third countries as properly as possible. If your organisation deals with the transfer of personal data to third countries – and this is more likely than you may think – we would be happy to help you work out how to comply with the requirements of the GDPR!