
ITP 2.1: What is changing and how do we deal with it?

Apple has announced plans to tighten the ITP (Intelligent Tracking Prevention) rules in its Safari browser. ITP version 2.1 is now live and immediately has a major impact on digital marketing and analytics due to its handling of first- and third-party cookies. Firefox has announced similar tracking prevention, also cracking down on first-party cookies in addition to third-party ones. In this blog we bring you up to speed on what these tracking preventions mean for organisations and how we have resolved this for the users of our Datastreams Platform.

What is ITP?

ITP stands for Intelligent Tracking Prevention. It represents Apple’s stand against online tracking and has been causing concerns for companies applying personalised marketing since its first incarnation. The first version started by limiting the possibilities for placing third-party cookies, with later releases increasingly limiting the potential for workarounds and alternatives. The previous version 2.0 blocked the placement of third-party cookies altogether. First-party cookies were largely unaffected by ITP. Until now, with the release of ITP 2.1.

What is changing?

The most important change in ITP version 2.1 for organisations engaging in digital marketing is the way both first- and third-party cookies are handled. After the update, first-party client-side cookies created through JavaScript’s document.cookie will expire after seven days. Third-party cookies created by domains other than the current website continue to be blocked, as was the case in ITP 2.0.
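
To make the change concrete, here is a minimal sketch of the kind of client-side first-party cookie that is affected; the cookie name and value are invented purely for illustration.

```typescript
// A first-party cookie created from page JavaScript via document.cookie.
// Even though it requests a one-year lifetime, ITP 2.1 caps a cookie
// created this way at seven days.
const oneYearInSeconds = 60 * 60 * 24 * 365;
document.cookie = `visitor_id=abc123; max-age=${oneYearInSeconds}; path=/; SameSite=Lax`;
```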

Where the blocking of third-party cookies had severe consequences for marketeers, the blocking of client-side first-party cookies has the potential to significantly impact analytics. Since site visitors who return after seven days will no longer be counted as returning visitors, current solutions for assessing conversion tracking based on these cookies risk breaking down.

What are we doing about it?

Currently, the solutions to ITP 2.1 are twofold: first, drastically limit reliance on third-party cookies. DimML, the language at the core of the Datastreams Platform, already enables our users to do this by allowing a script to be delivered from the same domain as the webpage that loads it. The second solution is to place first-party cookies through a server-side method instead of through the client-side document.cookie implementation.
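
As an illustration of that second solution, the sketch below sets the same hypothetical cookie server-side with an HTTP Set-Cookie response header instead of document.cookie. It is a minimal Node.js example, not our actual implementation.

```typescript
import { createServer } from "http";

// Placing a first-party cookie server-side: the browser stores it from the
// Set-Cookie response header, so it is not created through document.cookie
// and is therefore not subject to ITP 2.1's seven-day cap.
const oneYear = 60 * 60 * 24 * 365;

createServer((_req, res) => {
  res.setHeader(
    "Set-Cookie",
    `visitor_id=abc123; Max-Age=${oneYear}; Path=/; Secure; HttpOnly; SameSite=Lax`
  );
  res.end("ok");
}).listen(8080);
```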

We’ve released a new component within our platform that allows our customers to integrate the complete Datastreams Platform, with all its capabilities, within their own domain. This means that the Datastreams Platform is part of your IT architecture and not a third-party application. Data ownership and compliant data management are at the core of our architecture, so it will not be affected by ITP 2.1. This is a core differentiator from many SaaS marketing technology and consent management providers: we give you full control over how you manage your first-party data, with accurate and compliant data ownership driven by our state-of-the-art data architecture.

As the data and privacy landscape continues to change, we will continue to ensure the users of our Datastreams Platform can perform data analysis in an easy, secure and compliant manner. Do you want more information about how we are dealing with the ITP 2.1 update? Contact us!


Why you should (not have to) clean your company database

Spring is here, which means it’s time for a thorough spring cleaning. Aside from clearing the unnecessary papers out of those clogged filing cabinets, consider turning your attention to your company database this year, because according to recent studies of the data practices of contemporary organisations, you probably need to clean your database.

In a world where companies are growing increasingly data-driven, business success increasingly depends on analytics based on large quantities of high-quality, trusted data. While many organisations are succeeding in acquiring large amounts of data and applying analytics to them, data quality often leaves a lot to be desired. In a study conducted by Experian, 95% of organisations indicated experiencing wasted resources and unnecessary costs due to poor quality data. This is not surprising, since organisations on average believe 29% of their data to be inaccurate, and as is often said in the field of data science: ‘garbage in, garbage out’.

It is clear from the percentages above that, statistically, it is highly likely your company can benefit from a good spring cleaning of its database. Ensuring that data is valid, complete, stored in the right places and accurate across the organisation empowers you to trust your data again. This means you won’t have to waste time and money on marketing campaigns that are based on unreliable analytics. However, cleaning your data can be very time-consuming, especially if your data infrastructure is not designed to be managed easily by business professionals. Additionally, data will need to be cleaned regularly to keep your data environment healthy and usable. Luckily, a good data quality monitoring & assurance solution can make your life a lot easier by preventing dirty data from entering your database in the first place and making cleaning much less of a chore.

Data professionals know that data cleaning is a key part of any database management strategy. However, just cleaning your data periodically is not enough. If you don’t ensure data quality at the source, polluted data will continue to build up between cleaning sessions, potentially throwing off your analytics. That is why a strategy for validating data at the source, before it is analysed or enters your database, is crucial. Our Data Quality and Assurance module increases the overall quality of your data ecosystem by ensuring only quality data enters your database, and it continuously monitors your data streams to ensure they continue to supply complete, high-quality data. This, together with the streamlining and seamless integration of data streams across your company by the main Datastreams Platform, gives companies a clean and orderly environment in which to manage their data.
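
As a simplified illustration of what validating data at the source can look like, the sketch below checks an incoming record before it is allowed into a database. The field names and rules are hypothetical and not taken from our Data Quality and Assurance module.

```typescript
interface IncomingRecord {
  email: string;
  country: string;
  purchaseAmount: number;
}

// Return a list of problems; an empty list means the record may enter the database.
function validateRecord(record: IncomingRecord): string[] {
  const problems: string[] = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(record.email)) problems.push("invalid email address");
  if (record.country.trim().length === 0) problems.push("missing country");
  if (!Number.isFinite(record.purchaseAmount) || record.purchaseAmount < 0) {
    problems.push("implausible purchase amount");
  }
  return problems;
}

// Example: this record would be rejected before it pollutes the database.
console.log(validateRecord({ email: "not-an-email", country: "", purchaseAmount: -5 }));
```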

Implementing our solution does not mean you won’t ever have to clean your data (cleaning is imperative for keeping your data up to date and removing data you no longer need), but it makes these periodic cleanings a lot less time-consuming. Want to know more about our Data Quality and Assurance module and how it works? Visit our page about it.


Martijn Lamers: a talk about company involvement in contemporary education

As a young, innovative company we love to invite students into our office and have them work with us on one of our projects. Which is one of the many reasons we work together with Fontys University of Applied Sciences. Martijn Lamers is a lecturer there and the director of the Data Science minor. In the context of ‘company case’ assignments, he has supervised many students during their time working with us. We invited him for a good talk about data science, student projects and the future of education.

It is clear that Martijn Lamers has a heart for data science, people and teaching. With a background in both psychology and IT, he has a solid understanding of both people and data science. As a company always concerned with keeping the ‘human touch’ in big data alive, we instantly feel a kind of kinship with this well-spoken lecturer.

The role of companies in education
We believe that companies have an important role to play in education, now and in the future. Lamers agrees with us, telling us about the so-called ‘proftaak’: a practical project in which a group of six students puts their skills into practice. “In this project, students need to run a project from beginning to end, from data collection to reporting. Everything comes together; it’s no longer fragmented and theoretical,” Lamers explains. He tells us that these projects are often done in collaboration with companies, for good reasons.

Lamers indicates four reasons for collaboration between education and companies. The first is that finding enough big, interesting and available datasets for several groups of students is not easy without involving companies. Secondly, he explains that companies know what is happening in the industry, so student projects completed for a company fit the contemporary industry better. Thirdly, working with companies makes students feel like they are contributing something to society. “The project doesn’t just disappear in the bin after it’s finished,” he jokes. “It’s much more fun for students to work with real data in a real company, solving real problems.”

Additionally, working for a company is an important part of a student’s personal development. “At some point you need to break through the passive mentality of a student waiting for an assignment.” Working in a company is a good way to teach students a new ‘working’ mentality that they will need when they start their careers.

Finally, Lamers also explains why companies are motivated to work with Fontys: “Both parties benefit from the project: companies get access to young, motivated students to help them with their projects at limited cost, and it allows students to learn from companies and work with real data.”

The future of education
Regarding the future of education, Lamers clearly envisions the role that companies will continue to play. “I think the role of companies will become bigger in the education of the future,” he explains. “At Fontys, we now start inviting companies into the classroom earlier than we did before. It gives students access to the knowledge held by industry professionals.”

It does affect the role of the teacher. “Students are sometimes left to ‘figure out’ a lot for themselves, especially when companies are involved. That is not necessarily bad, but it is still important for students to be taught theory and be guided by a teacher.”

Working with Datastreams
We are proud to be one of the companies that Fontys can turn to when a group of students needs a good dataset or an exciting project to work on. In fact, five groups have worked with us since last year, with positive results. When asked for feedback on his and his students’ experience of working with us, Lamers praises the fact that we like to provide sizeable sample sets within a short timeframe, allowing students to get to work quickly. Some students have chosen to continue working with us after the project ended, a clear sign that we are doing the right thing!

We understand that, more than ever, to get the most out of their talents, students need to work with real data on real projects in innovative companies. Why not start a collaboration today?


Young & Talented: 5 reasons why we love working with students

We are Datastreams, and we love data. We help companies to collaborate with data and create new opportunities. We are always looking for talented students to join our team of data scientists. Why students? Because we love working with people who are like us: smart, talented & ambitious. Want to know more? Here are five more reasons why we love welcoming students to our office.

Students are willing to learn

We are always looking for young talent to share our knowledge with. In our experience, students are open to learning new things and less likely to get bogged down in their presuppositions about how things should work. A willingness to learn and the ability to adapt to new situations are the best skills you can have. You can gain experience over time, after all.

Students are passionate

Students are passionate about the work they do and highly motivated to use their skills to solve actual challenges in a real business. Student passion is not only a great contribution to our projects; it also keeps reminding us of why we love what we do.

Students are technologically savvy

As a data science company, we are no strangers to new, innovative technology. In fact, we have developed our fair share ourselves. However, many current students have grown up in a world permeated by IT in every facet of life, making navigating websites and applications second nature. A natural affinity with technology, combined with quality education about current and future trends, means students are better equipped than ever to work with complex IT applications, both now and in the future.

Students are willing to take risks

More so than the generations preceding them, students are willing to take risks. They are willing to go abroad to make memories or to give up a comfortable place to live to find their own footing. It is this willingness to take a gamble that we like in students: instead of working on tried-and-true projects that are industry standards, students are willing to (and often want to) try new, innovative solutions. This makes students perfect candidates to work on more unorthodox, experimental projects. Sometimes all it takes is someone willing to take that leap of faith to get amazing results.

Students are fun!

It’s not all business. The final reason why we like working with students is that they always bring life and energy into our office. Most of our team consists of young adults who still know what it was like to be a student and enjoy interacting with them. Whether they play in our office foosball tournament, organise ice-skating trips or just regale us with stories about student life, it’s always more fun when we have a student (or two) in our office.

Who will be our next student colleague?


Why students love working with us

At Datastreams we always have some students helping around the office. We already discussed the reasons we love to work with these students in our blog ‘5 reasons we love working with students’. However, every story has two sides, so let’s look at the reasons students love working with us (according to two of our current student employees).

1. Our students work on varied projects with real data

Students have told us that one of the best parts of working with us is the opportunity to work with real data. Students working with us get the opportunity to work on a variety of projects for real clients. This not only allows students to experience the trials and tribulations that can come with working with real data, but also to see their projects implemented by companies: “Many of the projects I’ve worked on are still being used,” one of our student employees told us. “It felt good to work on real, useful projects in addition to studying.”

2. Our office is a great learning environment

Learning about data science and IT at school is very valuable, but our students often tell us that being immersed in our data-driven environment is a fantastic learning experience. Data is the lifeblood of our company; it is our core business and ingrained in everything we do. Maybe that is why students tell us they learn a lot by listening to our data professionals and working on innovative projects. Got a question about data science? Chances are someone in our office knows the answer.

3. We offer flexible working hours

From our experience with students, we know that sometimes they have a lot of time to work and sometimes it’s exam week and they are completely swamped. We empathise. In fact, many of us still vividly remember it. That’s why we offer our students flexible working hours and the possibility to work from home.

4. Our office is never boring

Our office is full of young, enthusiastic, friendly people who are as happy to talk about data science as they are to share a beer or play foosball. That’s why it is never boring in our building.

Are you interested in working with us or do you want to know more about who we are and what we do? Don’t hesitate to contact us!


Dear Santa, people don’t want to be on your list anymore

Dear Santa Claus, last year we expressed our concerns about your data collecting activities. We advised you to make some big changes to avoid being fined under the GDPR. One of the changes we suggested was to ask people for consent before tracking them with your Elv3s software. We know that these lists are an important part of your business and help you give everybody the perfect personalised present, but we think it really might be time to change with the times. Because, Santa, as it turns out, more and more people don’t want to be on your list anymore.

According to data gathered across our own platforms, the number of people indicating they do not want to be tracked is steadily increasing. We have observed a 26% increase in Do Not Track headers in the last three months across our platforms. It seems clear that many people do not wish to have their behaviour tracked. We know that you understand, more than anyone else, the importance of granting people’s wishes. It is how you earned your jolly reputation, after all!

We understand that finding the perfect presents for people who don’t want to be on your list might be a bit more difficult. However, we are sure you will find a way to make everybody smile this Christmas, whether they are on your list or not. Merry Christmas, Santa!

Ps. If you need some help getting your consent practices up to date, we are happy to help you. That’s our gift to you, Santa!


The world of data is changing

The world of data is constantly changing and evolving. New technologies, legislation and policies pressure companies to re-examine the way they deal with data. Because it’s better to be prepared than to be surprised, here are five interesting ways in which the world of data is changing.

1. More and stricter legislation

The General Data Protection Regulation (GDPR) was not the first legislation cracking down on irresponsible data use, and it certainly won’t be the last. Government and non-government institutions around the world are establishing new policies and laws for processing data in a more ethical, transparent and secure manner. Some examples of recent laws are the California Consumer Privacy Act (CCPA), the Indian Data Protection Bill and the Brazilian General Data Privacy Law (Lei Geral de Proteção de Dados Pessoais or “LGPD”). Even in Africa, a continent where more than half the countries have no data protection law, change might be on the horizon with Kenya drafting a new law to protect customer data. With a future of increased legislative pressure, solutions built for compliant consent and encryption are becoming increasingly important.

2. People are more aware than ever

The days when data subjects were ignorant of the data being collected about them are gone. In May 2018 the Global Alliance of Data-Driven Marketing Associations (GDMA) published their research on global privacy attitudes, based on a survey conducted in November 2017 across ten countries. The report showed that while the majority of respondents were prepared to share their data, 74% reported being ‘concerned’ about their online privacy, and 83% indicated they wanted more control over the data they share.

With incidents like the Facebook and Cambridge Analytica scandal occurring earlier this year and the GDPR coming into effect, awareness of online privacy has only increased. A study by Janrain conducted in the US showed that 57% of respondents had increased concerns about their data privacy as a result of the Cambridge Analytica scandal. Additionally, according to a survey by SAS, a quarter of consumers in the UK and Ireland have already exercised their GDPR rights. It is clear that people have woken up to the issue of privacy and will likely keep growing in their understanding and awareness of online privacy issues. Since trust is the foremost reason that customers are willing to share their data with companies (as reported by the SAS survey), building trust through transparency is key to ensuring customers will continue to share data, even as new legislation and policies give them ever more power to stop doing so.

3. Privacy-conscious browsers are on the rise

On August 31st, Mozilla announced that it would start implementing changes to the Firefox browser to protect its users’ privacy. Future versions of Firefox will block web trackers by default, meaning users won’t need to take any action to prevent companies from following them across the web. Firefox is not unique in offering this do-not-track option, but it is unique in making it the default. While Chrome is still overwhelmingly the most frequently used browser, the focus on privacy by its number-one competitor, combined with doubts about Chrome’s own incognito privacy mode, might cause privacy-conscious individuals to make the jump.

In addition to established browsers like Firefox making changes to ensure user privacy, new browsers with a focus on privacy also appear to be on the rise. TOR has long been a popular choice to avoid tracking, but other browsers like Epic and Brave have also opted to target the privacy-conscious market. The latter makes use of the anonymous search engine DuckDuckGo and integrates with TOR to make private browsing via its ‘onion network’ easy and fast. On August 28th, Brave announced surpassing 10 million downloads on Android (up from 1.5 million in April).

While most internet users likely won’t make the switch from Chrome to a different browser soon, data analysts and marketers would do well to look at how other browsers are implementing privacy measures, if only to know what to expect when Google is put under pressure to make similar changes.

4. More advanced predictive analytics

The way we collect data may be changing, but so are the things we can do with our data. Predictive analytics using artificial intelligence, deep learning and machine learning are increasingly finding their place in marketing, allowing marketers to not only look to the past, but also into the future when using data. Technologically savvy companies use these advanced techniques to predict customer behaviour, identify potential leads and target customers at the right time with the right products. Research suggests that the investment is worth it: companies using predictive analytics are twice as likely to identify high-value customers, according to a study by Aberdeen Group.

Under the GDPR, many companies are faced with a push towards (partial) anonymisation of data. Technologies such as machine learning might grow in popularity as less data contains personal identifiers. This is because aside from predicting the value or behaviour of a single prospect, these technologies also work well on aggregated datasets without personal identifiers. In this way, they are useful for predicting the behaviour of groups, making them invaluable for predicting various types of market trends. Under the GDPR, then, predictive modelling using machine learning might prove to be instrumental in helping companies deal with larger amounts of (pseudo)anonymous data.

When touting the possibilities of technologies such as machine learning, we would be remiss not to remark upon a possible tension between machine learning and the GDPR. Under the GDPR, customers have the right to be informed of how their data will be used and to opt out of automated decision-making practices. Some experts have suggested that this is difficult to reconcile with machine learning, as machine learning models are generally not concerned with why specific choices are made. According to critics, constantly adapting ‘black box’ machine learning models cannot be adequately explained to data subjects, making informed consent for data collection impossible. We believe that predictive analytics, including machine learning, is still very much possible under the GDPR, but anyone working with these technologies should be mindful of issues like these and handle them with appropriate care.

5. Focus on data quality, not quantity

Despite the seemingly limitless promises of so-called ‘big data’, many marketers are still drowning in data. Partially to blame for this is the misconception that ‘more data is better’. While this is true to the extent that most predictive models work best with large amounts of data, the importance of data quality should not be underestimated. Both scientists and business experts are now stressing the importance of quality data, the latter pointing out that bad data can have many negative effects, such as wasting time and increasing costs.

Quality data is more important than ever as machine learning becomes more popular. For a model to accurately learn and predict customer behaviour, the data set used to train the model needs to be as accurate as possible. Many companies currently have access to fairly large datasets, but many of these are messy and unorganised. As the use of advanced data mining techniques in marketing and analytics increases, it will be the companies that focus on data quality, instead of just quantity, that will have access to the most reliable models and all the valuable insights that come with them.

Change can be scary, but it can also be good. It’s the organisations that anticipate changes and plan ahead that thrive in data-driven industries. What changes do you anticipate and how are you preparing for them?


The prisoner’s dilemma of data sharing (and how to solve it)

An intro to the prisoner’s dilemma

Bad news: you and your partner are picked up for a crime and put in separate cells. Each of you is given the option to either stay silent (cooperate) or rat out their partner (defect / not-cooperate). If you choose to cooperate and your partner does as well, you both get one year in jail, but if your partner doesn’t cooperate and rats you out, you get three. On the other hand, if you choose to rat out your partner yourself and they stay silent, you get to walk free. However, if both of you choose to talk (not-cooperate), you both get two years. You are not particularly close with your partner and want to minimize your own time in jail. What do you do? If you need to think it over, a representation of the options and results is given below.

Options and outcomes (your sentence / your partner’s sentence):

                             Partner stays silent     Partner rats you out
You stay silent              1 year / 1 year          3 years / 0 years
You rat out your partner     0 years / 3 years        2 years / 2 years

Upon pondering the situation you and your partner find yourselves in, you will likely discover that defecting is the most attractive option every time. Regardless of what your partner chooses, you will always be better off if you choose to defect (not cooperate). Because your partner thinks the same way, they will not cooperate either. As a result, you will end up in a suboptimal situation where there is no cooperation, while both of you would have been better off if you had both chosen to cooperate.

The prisoner’s dilemma has been applied to many decision and cooperation problems over the years, but not yet to the concept of data sharing in a B2B context. A shame, since insights from the prisoner’s dilemma can teach us a lot about why we do not share our data, and how we can start remedying that. In this think piece, we examine a modified version of the prisoner’s dilemma called the ‘data sharing dilemma’.

The data sharing dilemma

While you and I may not find ourselves in a criminal situation any time soon, we find ourselves in a prisoner’s dilemma of our own when deciding if we want to share data with other companies. This dilemma is visualized as the ‘data sharing dilemma’ in table 1.

The basic premise of the setup is that the direct costs and risks of sharing data or insights with another company amount to -1, while having a partner that cooperates with you in this way imparts a much larger gain of +6. This cooperation can be the other company sharing their data with you, or sharing insights based on the data you’ve shared with them.

Table 1: the data sharing dilemma (your result / their result)

                      They share           They don’t share
You share             +5 / +5              -1 / +6
You don’t share       +6 / -1              0 / 0

Clearly, if both companies cooperate, both pay the price for sharing data or insights (-1), but both also receive the gain (+6), amounting to a scoreboard of +5/+5 for each company. However, if one company cooperates (for instance, provides data) but the other does not reciprocate by sharing their own data or insights, the cooperating company is left with just the costs and not the gains (-1/+6), while the other company has just the gains of data shared with them and none of the costs. To stay safe, we might choose not to cooperate at all, much like the prisoners in our story (0/0). From a rational perspective, this does seem to be the most logical choice.
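
For readers who prefer to see the arithmetic, the short sketch below encodes the payoffs from table 1 and checks which choice is individually better against each possible move by the partner; the numbers are simply those described above.

```typescript
// My payoff for (my choice, partner's choice), taken from the data sharing dilemma:
// share while partner shares: -1 + 6 = 5; share alone: -1; withhold while partner shares: 6; neither: 0.
const payoff = {
  share: { share: 5, withhold: -1 },
  withhold: { share: 6, withhold: 0 },
};

for (const partner of ["share", "withhold"] as const) {
  const better = payoff.share[partner] > payoff.withhold[partner] ? "share" : "withhold";
  console.log(`If my partner chooses to ${partner}, my better choice is to ${better}.`);
}
// Withholding wins in both cases, even though mutual sharing (+5/+5) beats mutual withholding (0/0).
```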

In the end, our choices come down to answering a few questions: Do we trust other companies to collaborate with us, or will they only use our data to further their own business? Are we prepared to pay the costs of sharing our data for future gains beyond our control? Are we prepared to take the risk of cooperating, or do we stay in our safe yet suboptimal situation of not doing so? The fact that the potential of data sharing and collaboration has been widely acknowledged, but that the process itself is still fairly rare, seems to answer our question. While this seems a pessimistic view on the future of data sharing, the prisoner’s dilemma can be beaten by tweaking the situation the decision makers find themselves in.

Solving the prisoner’s dilemma

If we want to encourage data sharing, we need to understand how to break through the stalemate created by the prisoner’s dilemma. There are a few ways to do so, which can be applied to encourage data sharing collaboration in a B2B context.

1. Reduce risks and costs of data sharing

Sharing data often comes with a variety of costs and risks. In our modified example, this is represented by the -1 in our matrix. In the instance that both parties choose to collaborate, these costs and risks are compensated many times over by the results. However, if this doesn’t happen, data sharing only incurs costs and possible fines for the collaborating party, while the other party profits from a large reward without having to pay this cost. Why would anyone want to pay these costs without a guaranteed payoff?

Table 2: the data sharing dilemma without sharing costs (your result / their result)

                      They share           They don’t share
You share             +6 / +6              0 / +6
You don’t share       +6 / 0               0 / 0

This part of the dilemma, which holds back data sharing, can be solved by mitigating the cost and risk of sharing. If we can make cooperating virtually ‘free’ (that is, without substantial cost and risk), we arrive at a different scenario. Since there is no significant cost to sharing, the gains in the ‘cooperate’ situation are the same as the gains in the ‘don’t cooperate’ situation, because no costs need to be subtracted. It is also less risky to share data, as is shown by the lack of minuses in the table.

By reducing the costs and risks of sharing data or insights, we can lower, or even remove, that early hurdle that prevents cooperation. Easy, safe ways of sharing, then, are part of the solution towards ensuring more data sharing and data collaboration. Solutions and platforms for sharing data in this way are on the rise, and we are happy to do our part with our Data Stream Manager. The DSM allows companies to share data easily and safely, among other features allowing the sharer to determine what the data can and cannot be used for.

2. Establish contracts and/or outside regulation

One of the problems of the classic prisoner’s dilemma is that there are no real repercussions for not cooperating. If these are added (for instance, by imagining a scenario of sequential prisoner’s dilemmas where participants adopt a tit-for-tat strategy, betraying in the next round if they are betrayed in this one), cooperation is much more frequent.

Table 3: the data sharing dilemma with a penalty applied to non-cooperation, so that cooperating yields a higher payoff than withholding.

In table 2, we see a similar issue in the data sharing dilemma: there are no repercussions for not cooperating, even when a partner does choose to do so. A company that cooperates by sharing information still gets the same reward as a company that only uses the data shared with it for its own gain, without sharing its own data or insights. We can overcome this by making non-cooperation costlier, reducing the gain for non-cooperation below the one for cooperation. This is the situation presented in table 3. Clearly, the rational choice in this table is to cooperate.

We can create the situation in table 3 by setting up a good contract or data sharing agreement between businesses. This allows companies to specify how and when data can be shared, and to apply penalties to non-cooperative behaviour. If companies specify beforehand that they will both share data with each other, to be processed under a set of conditions, an enforceable data sharing agreement ensures that not doing so comes at a cost. By coming to an agreement about data sharing with other companies before sharing data, cooperation becomes a more attractive option than non-cooperation. While this essentially solves the data sharing dilemma before it truly happens, companies need to be willing to sign such an agreement in the first place. For this to happen, the third solution to the dilemma is invaluable.

3. Build trust between companies based on shared motivations, goals and values

The third solution to the prisoner’s dilemma is one which theoretically works even without the previous solutions: mutual trust. Two friends placed in the prisoner’s dilemma will be much less likely to betray each other (that is, to not cooperate). In the same way, establishing a foundation of shared values with the other company is a strong force in ensuring cooperation and collaboration. When approaching another company to set up a data sharing agreement, make sure that both parties understand where the value is in collaborating and how both parties can profit from it. Truly cooperative data sharing will be much more likely under these conditions, especially when supplemented by the solutions above.

Continue to think about collaboration

Choosing to share your company’s data is not always an easy choice; it often incurs costs and risks. Additionally, it might not always be the most rational choice, as we see in the first version of our data sharing dilemma. However, the potential of data cooperation and collaboration is great, as evidenced by the ‘Study on data sharing between companies in Europe’ commissioned by the European Commission. Therefore, thinking about data sharing and what is holding us back from practising it more is a worthy avenue to explore. This think piece demonstrates just one way of thinking about data collaboration and cooperation, using a well-known (thought) experiment on collaboration. We acknowledge that this text presents a somewhat oversimplified scenario of the complex issue of data sharing, but we hope it will prompt you, the reader, to think about this topic in a different way. We invite everyone to continue the discourse with us about the hows and why (not)s of data sharing, cooperation and collaboration!


GDPR for Processors: Four things you should know (in six minutes)

The General Data Protection Regulation (GDPR) brings big changes for organisations processing customer data. To comply with the new legislation, it is crucial you understand where your responsibilities lie under it. In this blog, we present the four most important things a data processor should know about the GDPR.

1. The GDPR is a regulation that (also) applies to processors

One of the seemingly small, but very important differences between the GDPR and the Data Protection Directive, is that the GDPR is a regulation instead of a directive. The regulation status of the GDPR means it exerts a legally binding force on all member states. Concretely, this means the regulation applies the same way across all EU Member states.

The other difference between the Data Protection Directive and the GDPR is that the GDPR places direct obligations on data processors for the first time. As a data processor, you will be responsible for ensuring compliance, or risk being held liable by controllers or data subjects and being fined by the authorities. Since controllers will be looking for compliant processors, demonstrating this compliance is also key to continuing to work with controllers at all!

2. The GDPR augments the rights of the subject

As data processor, you are generally less affected by the rights of the subject than data controllers. However, it is still important to understand the rights of the subject under the GDPR, as you will be expected to assist your controllers in respecting them in whatever way possible.

Under the GDPR, data subjects have the right to receive a copy of data being stored about them and can request data to be rectified. They can also object to the processing of their data and withdraw their consent at any time. Work with your controllers to streamline the procedures of removing and rectifying data or dealing with withdrawn consent to help them respect subject rights.

3. GDPR-compliance requires focus on some key areas

Many overviews of the GDPR are very extensive, including aspects of the regulation that might not be relevant for you as a data processor. There are, however, plenty of important changes you might need to implement as a data processor. We name five of the steps most data processors will have to take on the road to compliance. An extended list of actions can be found in our whitepaper.

  • In many cases you’ll have to designate a data protection officer (DPO) and communicate their contact details to the supervisory authority. Even when not required by the GDPR, appointing a DPO is a good idea. This data protection officer is involved in all issues relating to the protection of personal data and holds an independent position in the company.
  • Ensure that no processing takes place on personal data except on the controller’s instructions. Make sure that this is common knowledge across your company, to prevent any natural person working for the company from doing so unknowingly. Additionally, ensure you do not engage with another processor without authorisation from the data controller.
  • When working with a controller, you should enter into a written contract with the data controller to specify processing activities and duration. An example is entering into a “Data Processing Agreement” (DPA). Any sub-processors will be subject to the same contractual data protection obligations as those between the first data processor and the data controller.
  • As processor, you should provide sufficient guarantees to controllers that appropriate technical and organisational measures for GDPR compliance are implemented. Additionally, processors should ensure a level of security appropriate to the risk posed by data processing.
  • If you employ more than 250 people, you are required to maintain written records of processing activities. These records must contain specific information (specified in the GDPR) and be made available to supervisory authorities.

4. Non-compliance can have serious repercussions

We talked before about how non-compliance can have serious repercussions for data processors. Under the GDPR, data subjects have the right to lodge complaints about data processing and, crucially, can hold the processor liable. Specifically, you can be held liable for the damage caused by processing where you have not complied with the GDPR obligations, or where you have acted contrary to the lawful instructions of your data controller. Finally, just as for controllers, fines of up to €20,000,000 or up to 4% of global turnover can be imposed on non-compliant organisations.

The GDPR is a complex piece of legislation, and this blog by no means offers an exhaustive overview of its content. Cooperation between your legal department, IT department, upper management and outside professionals is key to getting to grips with the GDPR in time. At Datastreams.io we are happy to do our part, providing our Data Stream Manager and Consent Manager. These solutions allow you to manage data streams and consent in your company in a comprehensive and structured way, so you can get one step closer to GDPR compliance.


GDPR for Controllers: Six things you should know (in six minutes)

The General Data Protection Regulation (GDPR) brings big changes for businesses collecting or processing customer data. To comply with the new legislation, it is crucial you understand where your responsibilities lie under it. In this blog, we present the six most important things a data controller should know about the GDPR.

1. The GDPR is a regulation that (also) applies to processors

One of the seemingly innocuous, but very important differences between the GDPR and the privacy directive, is that the GDPR is a regulation instead of a directive. The regulation status of the GDPR means it exerts a legally binding force on all member states. Concretely, this means the regulation applies the same way across all EU Member states.

The other difference between the privacy directive and the GDPR is that the GDPR holds processors responsible for processing data in a compliant way. Data processors are required to demonstrate compliance with the GDPR to avoid fines. Additionally, data controllers are only allowed to work with processors who provide sufficient guarantees that they will do so. This means your processors will likely be more motivated to ensure compliant processing, but it also highlights the importance of carefully selecting your processors.

2. Compliance with GDPR-principles must be demonstrated

The GDPR contains several important principles that you need to understand and incorporate into your own business practices. Crucially, you will also need to actively demonstrate your compliance with these principles. The first set concerns the data protection principles, which ensure processing is fair and transparent and that no unnecessary data is collected, processed or stored. Additionally, you’ll need to demonstrate lawful processing. This means that processing has to be based on one of the grounds for processing, such as consent or contracts. If you use consent as your legal basis for processing, you need to ensure it is a freely given, specific, informed and unambiguous indication of the data subject’s wishes. Finally, when processing children’s data on the basis of consent, you must make reasonable efforts to verify parental consent.

3. The GDPR augments the rights of the subject

One of the reasons why the GDPR is a good regulation for data subjects, is that it improves upon their rights. It’s important that data controllers understand what rights data subjects have and ensure these rights are respected.

Under the GDPR, data subjects have the right to receive a copy of data being stored about them and can request data to be rectified or erased. They can also object to the processing of their data and withdraw their consent at any time. These are just a few of the rights a subject has, but they are enough to show the amount of power data subjects have over their data after you’ve collected it. Make sure you communicate these rights to your customers and respect them at all times for compliant processing and a good customer relationship.

4. The GDPR is also about communication

The GDPR is not just about how you handle data, it’s also about how you deal with people. The regulation requires you to communicate with your data subjects in a concise and transparent manner regarding your data collection activities. Additionally, when collecting data you need to provide customers with information such as your company’s identity and contact details and the purposes of processing. Also, make sure you communicate requested information and any rectification or erasure of personal data to your customers. Finally, be prepared to inform your data subjects without undue delay of a personal data breach.

5. GDPR-compliance requires focus on some key areas

The GDPR is a broad legislation, touching upon many different areas of data processing. Exactly which changes you have to make depends on the structure of your company and a full list of possible actions would be quite long. We have, however, compiled five key areas that you should focus on. See our whitepaper for an extended list of possible actions.

  • In many cases you’ll have to designate a Data Protection Officer (DPO) and communicate their contact details to the supervisory authority. Even when not required by the GDPR, appointing a DPO is a good idea. This data protection officer is involved in all issues relating to the protection of personal data and holds an independent position in the company.
  • Implement appropriate technical and organisational measures to ensure appropriate security and demonstrate processing is in line with the GDPR regulations. You should also become familiar with the principles of data protection by design and default, implementing data protection principles in every part of handling customer data. Crucially, as a controller you should make sure your processors do so as well.
  • If you employ more than 250 people, you are required to maintain written records of processing activities. These records must contain specific information (specified in the GDPR) and be made available to supervisory authorities.
  • When working with a processor, make sure to enter into a written contract specifying processing activities and duration. Ensure this contract covers important GDPR obligations, such as that processors may only act on your instructions.
  • Carry out a Data Protection Impact Assessment (DPIA) prior to potentially high-risk processing, and seek the advice of your DPO while doing so. If you cannot take measures to mitigate the risk, the supervisory authority should be consulted.

6. Non-compliance can have serious repercussions

We don’t want to scare you, but non-compliance with the GDPR can turn out to pose a big threat to your business. Under the GDPR, data subjects have the right to lodge complaints about your data processing. Additionally, controllers are liable for damages caused by non-compliant processing, and data subjects may have the right to receive compensation. Finally, fines of up to €20,000,000 or up to 4% of global turnover can be imposed on non-compliant organisations.

The GDPR is a complex piece of legislation, and this blog by no means offers an exhaustive overview of its content. Cooperation between your legal department, IT department, upper management and outside professionals is key to getting to grips with the GDPR in time. At Datastreams.io we are happy to do our part, providing our Data Stream Manager and Consent Manager. These solutions allow you to manage data streams and consent in your company in a comprehensive and structured way, so you can get one step closer to GDPR compliance.