People, machines, sensors, and devices are generating unimaginable amounts of data — every minute of every day. Cloud computing, social media platforms, and smartphones are everywhere. Everyone is connected — 3 billion people online, 5 billion mobile phones, and 6 billion connected devices.
The data we generate is estimated to grow tenfold from 2016 to 2025, to 163 zettabytes. We hear the term “big data” in everyday conversation and in every channel. Most of us tend to agree that the big data trend is generally a good thing, but it is fundamentally changing how we communicate, how we interact with machines, how we socialize, how we work, and ultimately how much information we choose to share about ourselves with the public.
There are advantages and disadvantages to the widespread emergence of big data. That is not to say the disadvantages can’t be addressed, managed, and resolved through improved technology, as well as debate, regulation, and policy changes. As we become increasingly aware of the scope and volume of data being collected, analyzed, and used by organizations, we must collectively determine what is appropriate and ethical, and in some cases weigh the disadvantages (such as less privacy) in the interest of the greater good (such as public health and safety).
The rapid increase in the creation, availability, and processing speed of digital data has resulted in an exponential proliferation of data stored and analyzed.
The growth in volume is also being fueled by the increasing diversity of the data. In the early days (2000-2010), it was primarily numbers and documents. Today, we have data that is generated from the Internet, photos, videos, phones, bots, social media, sensors, and the exponentially growing field of IoT (Internet of Things) — connected devices generating and storing data 24 hours a day, 365 days a year. Within the next decade, the share of data being captured and stored within organizations (enterprise) is expected to grow by orders of magnitude — overall and relative to personal devices. In other words, if you think there is a lot of data today, that number is set to grow significantly with each passing year.
The emergence of connected devices, and the number of interactions per connected person per day, is also set to explode in the next decade. Granted, a lot of these interactions will be through phone, entertainment devices, and our automobiles.
However, relatively new to the data scene, the emergence of connected devices in the healthcare industry will certainly drive a large share of the increase in data capture — and the march to 5,000 interactions per day. Given the exponential growth in the amount of data and the increasing diversity of data, it will become critical for big data to move into the “smart data” age. In other words, as we get better at collecting and storing massive amounts of data, we must also get better at using the right tools to derive insights.
In March 2018, the New York Times broke a story that consulting firm Cambridge Analytica used deeply personal data on more than 50 million Facebook users for political targeting. It was later revealed by Facebook that the impacted population was closer to 90 million. Although the debate started with a discussion about the use of personal data for political advertising and targeting, the conversation quickly evolved into a larger discussion about the proper use of data by social media, advertisers, and online firms in general.
This incident was somewhat unique in that it was made possible by using a third-party app to collect survey questions, profile data, and the data of users' Facebook friends. The data was then used to create psychographic profiles for subsequent targeting. We all know that online data was used — very successfully — in the 2012 election. However, in the 2016 presidential election, the debate centered on the fact that the data was very detailed, used in a way that was relatively new, used against policy, and generated questions about the “ethical” use of data for marketing purposes.
The rapid increase in the availability of data to help marketers provide targeted messaging in the health and pharmaceutical space has been generally viewed as a good thing. Customers get relevant advertising; brands get efficiency; and the publishers and platforms sit in the middle — charging a premium for highly targeted audiences. However, the use of Facebook data by third parties — and the debate over how Facebook let it happen — has resulted in a growing number of consumers asking how they can control their self-generated social media data. Given that most social media and publishing platforms rely on personal and tracking data to support ad sales, this debate will not end anytime soon. During a recent interview, Sheryl Sandberg at Facebook indicated that consumers wishing to opt out of any type of data tracking may be considered as an audience for a “paid product” from Facebook. In other words, if you are not willing to relinquish your data rights — and disclose your activity online — platforms may argue you need to pay to use their technology. Those willing to share their data — driving targeting and advertising — are welcome to continue to use it as a “free” service.
In response to the Facebook/Cambridge Analytica scandal, some customers have joined the #deletefacebook movement and are going as far as deleting their history and activity archive from the platform. Ironically, the #deletefacebook hashtag was originally posted by the founder of WhatsApp, Brian Acton — a company acquired by Facebook for billions of dollars. Of course, Brian has been very vocal over the years about maintaining user privacy as a primary focus within WhatsApp.
Is the backlash against Facebook an isolated reaction, or will it represent a segment of consumers placing the importance of data privacy over targeting?
Time will tell. Some experts believe that only about 2 to 3% of all Facebook users are willing to take the step of deleting their account and activity. The stock market certainly did its best to judge the impact of the scandal — the stock (FB) initially lost about $50 billion in market value after the scandal was reported.
Of course, the Facebook story is only one part of the discussion about the future of privacy — and regulation — as we enter the age of stricter regulations in 2018.
The General Data Protection Regulation (GDPR) is Europe’s new framework for data protection laws, effective in May 2018. Any company that stores or processes personal information about EU citizens must comply with the GDPR. Companies that fail to comply could face steep fines.
The good news is that while GDPR is a cross-industry regulation, the health and pharmaceutical industry historically had some of the highest data privacy and security standards — often driven by ultra-conservative legal teams. In other words, given that most pharmaceutical companies were already hyper-sensitive about data privacy, GDPR will be a chance to review those policies and procedures. The critical part will be to ensure all data, marketing, and service partners are aware of GDPR — and compliant.
The GDPR regulations are designed to standardize data privacy and protection laws across Europe, but the impact will be felt globally as most organizations will act to maintain compliance. The regulations apply to any organization that handles EU data, without consideration for where the organization is based. In other words, unless you have a plan to guarantee you will never have data from any EU citizen in your data set, you will need to become compliant. The regulations change how data can be used, managed, stored, deleted, and released.
Additional information about the GDPR regulations can be found at www.eugdpr.org.
As with most things in life, the online debate over access to technology and the desire for privacy is very nuanced — and at times confusing. A 2014 study by The Pew Research Center found only 9% of consumers believe they have “a lot of control” over the information and data collected from them. However, in the same study, 74% of consumers say it is “very important” to be in control of the information collected about them. In other words, we think data privacy is very important, but many of us also realize we really don’t have much control over what happens. The same study also found that 64% of consumers support more regulation of advertisers online.
Another study by the Pew Research Center in 2016 looked at the amount of confidence consumers have in various companies and organizations to keep their data private and protected. Perhaps a preview of things to come, only 9% of US adults were “very confident” that social media sites would protect their data. Another 38% were “somewhat confident” in social media platforms. US adults seem to place the most trust in their cell phone manufacturers — followed next by their credit card companies, then cell phone service providers. For those familiar with data practices, credit card and cell phone companies are very proficient aggregators and users of data — which is often used for subsequent targeting.
Is the current debate over privacy a temporary discussion — or part of a larger trend in which consumers demand more control over their data (and subsequent use)? Only time will tell.
That said, the implementation of GDPR in 2018 and the inevitable government hearings at the country and regional levels will provide ongoing fodder and fuel for consumers to determine their level of comfort with online platforms and how much they are willing to give up in return for access to social media, customized content, and highly targeted and relevant advertising.
Can artificial intelligence be creative? Can an algorithm create advertising from scratch — on par with a marketer with decades of experience? Not yet. That said, we are at the point where AI and machine learning algorithms can choose and serve up pre-existing content based on audience data in a way that allows the creative to focus on the pure strategy and copy — and the AI to vastly improve efficiency and effectiveness when it comes to getting the right content, ad, and experience to the right segment. A projection from analyst firm Gartner already estimates that 20% of commercial content today is being created (broadly defined) by machine learning.
While AI is not yet winning awards for the "best" ad, there are some examples that demonstrate it is successfully being used to create content overall: by curating news, creating click-worthy headlines, and producing video that would rival an expert with years of video-production experience.
Marketers across industries are using data analytics and AI not only to learn consumer preferences, but also to predict — perhaps even before the consumer realizes it — what a consumer will be most interested in. In other words, best practices in many fields can be learned — and then replicated. The key is to provide the raw education to the AI platform, let it continue to learn, and then let it refine the approach over time.
One example of using AI to create content was a recent effort by Nvidia (a chip firm) related to music. Nvidia trained a computer to compose music “like” Star Wars composer John Williams and then had an orchestra play the result. The company showed the final piece at CES 2018 — a large technology conference in Las Vegas. The company worked closely with Disney to teach the neural network how to compose. As Nvidia put it, "Our ultimate purpose is to build computing platforms that allow you to do groundbreaking work."
Does that mean, over time, we don’t need a musical genius like John Williams? No — we still need experts to teach the machines and neural networks. However, over time, the benefit comes from using the platforms to create “new” pieces with much greater efficiency. The same example can apply to marketing and advertising copy — eventually. Great creative minds will teach, sometimes unknowingly, neural networks what makes a great campaign or copy, letting the machine focus on creating — without the need for a nap or any rest at all.
Another trend that is being driven by AI, targeting, and hyper-personalization is “snackable” content. The term has been used to describe the phenomenon of short-form content (text and video) designed to engage a customer audience facing an increasing number of distractions in a constantly connected world. For the health and pharmaceutical marketer, this translates into creating content that can be consumed in chunks — over time or across platforms. It’s the debate of going from a 30-second (traditional television commercial) mentality … to a 15-second mentality … to a five-second mentality. It’s telling stories over time and over sites, while providing a coherent brand message to the customer at the same time. Clearly, pharmaceutical brands have unique regulatory challenges like fair balance that make the idea of truly dynamic content and five-second videos a real challenge.
It can be argued that while pharma may never reach the level of truly dynamic content that exists in some sectors such as entertainment — where brands may let consumers actually “create” ad content based on their browsing history, search, or purchasing activity — pharma does have an opportunity to invest in creating content that can be consumed in chunks, and then use AI and intelligent targeting to serve up that content in a personalized and compliant manner. AI has the potential to change how customers interact with information, technology, and services. It also has the potential to help marketers achieve the Holy Grail of relevance to the individual customer — at scale. AI will enable marketers to tailor approved campaigns to the customer in the moment based on intent.