
The digital content trap (book excerpt)

14 minute read

Gerry McGovern

Writer. Speaker. Developer of Top Tasks Framework.
There is a growing awareness of humanity’s impact on the environment and the need to change behaviour. However, rarely does digital get put under the microscope to see how it has contributed to climate change and other harms to our planet. Digital is physical. Digital cannot exist without energy. Everything that happens in digital has a physical energy cost. Digital is an accelerant of some of the worst human behaviours: extreme convenience and unprecedented waste generation. In his latest book, World Wide Waste, Gerry McGovern sets out detailed actions and cultural changes. None of these involve giving up digital; they are about using it more wisely, so that it is a friend—rather than an enemy—of our planet. What follows is an extract from the book.


Too much content

Zettabyte Armageddon

Up to 90% of digital data is not used. We collect. We store. We create and then don’t use. Data is the atomic structure of digital. Words, music, images, films, videos, software. It all ends up as data. Most data is like single-use, throwaway plastic. What sort of society accepts 90% waste?

  • Around 90% of data is never accessed three months after it is first stored, according to TechTarget. 
  • 80% of all digital data is never accessed or used again after it is stored, according to a 2018 report by Active Archive Alliance.
  • Businesses typically only analyze around 10% of the data they collect, according to search technology specialist Lucidworks. 
  • 90% of unstructured data is never analyzed, according to IDC.
  • 90% of all sensor data collected from Internet of Things devices is never used, according to IBM.

Much of this is machine-generated data. However, lots of it is generated by humans, by professional content writers. In 1994, there were 3,000 websites. Today, there are 1.7 billion, roughly one website for every four to five people on the planet. In 2018, 33 zettabytes of data were created. If we printed out one zettabyte of data as books, we could give every one of the 7.7 billion people on this planet 129,870 of these books. 

By 2025, it’s estimated that there will be 175 zettabytes, and that by 2035, there will be more than 2,000 zettabytes, according to Statista. Do you know how many trees we need to plant to deal with the CO2 pollution created by 2,000 zettabytes of data? Eight hundred forty billion. We are on the verge of a data apocalypse. 90% of this data is crap. It’s useless waste. Digital is in the process of destroying the planet for no reason other than the sheer laziness and lack of care of the digital industry. We are all—particularly content producers—part of this problem. We can all be part of the solution.  
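For readers who want to check those numbers, here is a quick back-of-the-envelope sketch in Python. The one-megabyte-per-book figure and the decimal definition of a zettabyte are assumptions for illustration, not figures from the book:

    # Back-of-the-envelope check of the figures above.
    # Assumptions (illustrative, not from the book): 1 ZB = 1e21 bytes
    # (decimal), and one printed book holds roughly 1 MB of text.
    ZETTABYTE = 1e21           # bytes
    BYTES_PER_BOOK = 1e6       # ~1 MB of plain text per book (assumption)
    WORLD_POPULATION = 7.7e9   # people, as stated in the excerpt

    books_per_person = (ZETTABYTE / BYTES_PER_BOOK) / WORLD_POPULATION
    print(f"{books_per_person:,.0f} books per person")   # -> 129,870

    # The 840 billion trees are quoted against 2,000 zettabytes,
    # which implies about 420 million trees per zettabyte.
    print(f"{840e9 / 2000:,.0f} trees per zettabyte")    # -> 420,000,000

The first result lands exactly on the 129,870 books quoted above, which suggests the book’s sum assumes about one megabyte of text per book.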

Content is like sugar. We can’t get enough of it. It’s an addiction. The consequences are everywhere.

If we look at the Web as a living system, we could say that it is obese. While the tools, systems and processes for publishing and storing content are abundant and cheap, the tools and processes for review and removal of what is out-of-date are anaemic and weak.

That’s because we are addicted to publishing and we hate cleaning up after ourselves. If the Web was a digestive system, it would have no capacity to poop. 

“The web made lots of content available for ‘free,’” medical journal editor Susanna Guzman explains. “Whether it was good or reliable or not was of secondary concern, it seemed. As an editor working on staff of a medical journal, I did everything I could to ensure that the journal's brand for being evidence-based and transparent was upheld. That publication, and others that stayed true to their brands and didn't sell out, are now benefiting from that decision. If they're still in business, that is. Many are not, having not been able to compete with free.”

Digital bloat is everywhere. Going on the Web to find accurate information is an increasing challenge. Quality control is often very poor when it comes to web content. Once it’s up there, it’s hardly ever reviewed. If new information emerges that makes what is already published out-of-date, misleading or wrong, there are rarely proper procedures in place to update, and where necessary remove, the out-of-date content. 

That’s surely not the case with health information, you say. Think again. I’ve worked with numerous health organizations over the years and their management of web content was patchy at best. At one stage, the US Department of Health and Human Services had 200,000 pages on its website. It finally got around to reviewing what it had and deleted 150,000 of them. Nobody noticed. Not a single enquiry for one of those 150,000 pages. Why were they there? What purpose were these pages serving? 

I have known other health organizations that hadn’t reviewed their web-based health information in five years or more. Here’s the scary thing. In 25 years of working with organizations in 40 countries, I have found that in the majority of cases nobody is responsible for reviewing and removing out-of-date content. I have often pointed to out-of-date content. I would check again in another six months and the very same out-of-date content would still be there. When I would stress the need to keep the content up to date, the digital team would explain that management simply didn’t care and would not provide enough resources for review. It was all publish, publish, publish. 

In World Wide Waste, I decided to focus on a very specific area of health in order to get a sense of how the quality of the content was being managed. The area I chose was the link, or otherwise, between cancer and sugar.  

“There’s a lot of confusing and misleading information on the Internet about the relationship between sugar and cancer,” an article published by Memorial Sloan Kettering stated in December 2016. “The notion that refined sugar causes cancer or that cutting sugar from the diet is a good way to treat cancer are two common — and incorrect — claims that turn up in a Google search.” So, according to Sloan Kettering, it is incorrect to state that there is a connection between sugar and cancer. I decided to get in touch with Sloan Kettering about this blog post because I was finding lots of information on the Web indicating that there is a link between cancer and sugar. One such study, published in 2019, was even carried out by Sloan Kettering researchers. 

Sloan Kettering were kind enough to reply, which was unusual; I got in touch with lots of organizations while doing this research, and very few replied. This is not surprising. In my experience, most organizations are either unwilling or unable to respond to feedback. Once something gets published, it’s finished. It might as well be set in stone. Reviewing it, responding to feedback on it, let alone taking it down, would be a truly exceptional activity. We must change that. We must make review and removal as important as publishing. This is how we will get control of the publishing flood and ensure quality, accurate information.

Sloan Kettering responded that what the blog post was trying to communicate was that there was not enough evidence of a direct link (causation) between sugar and cancer when it was written. However, they stated that they were “actively studying how sugar relates to cancer risk, treatment, and outcomes, so we certainly don’t consider the possible link between sugar and cancer a ‘myth’.” What I concluded from my correspondence with Sloan Kettering is that while the link is not fully proven and more research needs to be done, there is enough evidence of a connection between sugar and cancer to warrant further research. 

Yet, as I looked at lots and lots of other sources from reputable organizations, I kept coming across the word “myth” in connection with sugar and cancer. A myth is a false belief or idea. A myth is the original fake news. Even in an age of zettabytes, words matter. When you say something is a myth you are using very strong, definitive language. In medicine, myths are associated with fraud, fallacies, fads, quacks. But is the link between sugar and breast cancer really a myth? I found lots of studies on the Web that indicated that there was indeed a link. However, for every study I found showing a link, I found other content claiming it was a myth. Who to believe?

Content is loaded with history, and carries with it all sorts of bias. Data is political. There is always a story behind why certain data exists and why other data doesn’t. Data is always imperfect because it reflects our imperfect society. Always be skeptical about data.

“Can sugar cause cancer? It seems that evidence pointing this way was discovered in a study funded by the sugar industry nearly 50 years ago — but the work was never published,” an article in Medical News Today stated in November 2017. In 2016, The New York Times reported that new evidence had emerged that in 1967 Big Sugar had paid three Harvard scientists to publish a review of research on sugar, fat and heart disease. The studies used in the review were handpicked by Big Sugar, and, of course, told a sweet story. 

One of the scientists was Dr. Frederick Stare, then head of Harvard University’s department of nutrition. Dr. Stare claimed that people got as much or more food value from processed foods as they did from natural food, a preference he dismissed as a “food fad.” He advised people to eat “additives—they’re good for you.” He thought that Coca-Cola was a wonderful “healthy between-meals snack.” He loved sugar, calling it “a quick energy boon and pleasant to take.” You guessed it. Big Sugar was pumping donations into his department. 

Another scientist who wrote the Big Sugar article was Mark Hegsted, who would go on to draft the forerunner to the US federal government’s dietary guidelines. In 1980, the US issued its first set of dietary guidelines. 15% of US citizens were obese when these guidelines were published. By 2016, it was 40%. Correlation is not causation, though it is a bit strange that after the first dietary guidelines were published, US obesity bulged. The committee writing the 2020 US nutrition guidelines was well sprinkled with former lobbyists and people who have been funded by Big Sugar and Big Food.

Bias in data is rampant. What are we as content professionals going to do about it? Assuming that the organizations we work for want to address that bias, how do we ensure quality and accuracy when it comes to the content we publish? More importantly, how do we ensure that the content that has already been published does not have bias and inaccuracies?

More content, less light

One reason organizations give for claiming something is a myth, when in fact the reality is more complex and subtle, is that they need to make things simple, to ‘dumb down’ for the general public. There is some merit to this argument. Another reason is that many organizations have still not adapted to how the Web has changed communications from a one-way, controlled channel to a messy mix of diverse publishing and feedback. Historically, such organizations were often the only source of information on the subject within a country. 

There is also a culture in traditional publishing that believes that that which is published is to be revered, and that what has gone before sets the scene for the future. If your organization has been calling something a myth for decades, it’s hard to change the language, the tone. As well, organizations everywhere are notoriously bad at reviewing, updating and, where appropriate, removing or archiving content.

Review and maintenance are so essential when it comes to data. We must become much better at taking care of what we have rather than constantly creating new stuff. 

A counter argument is that we should allow everything to be published, and that the flood of new data will ultimately drive science and society forward. If we look back in history to other revolutions in communication, we find that was not always the case. With information comes misinformation. We see how in the United States and Great Britain, for example, Facebook et al. are accelerating the development of misinformation societies. 

“There is no evidence that, except in religion, printing hastened the spread of new ideas… In fact, the printing of medieval scientific texts may have delayed the acceptance of Copernicus,” Elizabeth Eisenstein wrote in her book, The Printing Revolution in Early Modern Europe. 

We must manage our data much better. We must establish processes to root out as much as possible of what is wrong, what has been deliberately manipulated, what is prejudicial, what is fake. Otherwise, the digital world—whose building blocks are data—will become a world of crap and lies.

Back around 2011, the Norwegian Cancer Society had a 5,000-page website, with 45 part-time publishers. The Society carried out a Top Tasks survey, which is a research method I developed to help understand what really matters to people. The results showed that a very small set of tasks, centering around treatment, symptoms and diagnosis, were vastly more important to people than a whole range of other content. A comprehensive review of the content on the website began. Lots of duplicate content was discovered. This was mainly because the Society worked in departmental silos, each silo creating its own content, unaware that similar content existed in another silo. 

It was found that having 45 part-time publishers was unmanageable and ineffective. Because publishing content was a small part of these people’s jobs, they could never find time to properly review, to collaborate with others who were publishing, or to do training and improve their skills. It was decided that the team should be reduced to six people who would be able to dedicate a substantial portion of their time to content. These people would also actively collaborate with each other. 

The result was that the site was reduced from 5,000 to 500 pages that were consistently reviewed and managed. The Society is a charity and needs to get donations from the public. On the old site, there were calls for donations everywhere. However, the Top Tasks results clearly showed that donating to the Society was in no way a top task. A brave decision was made to focus on citizens’ top tasks of treatment, symptoms and prevention, and to remove lots of content connected with donations. The results? Nurses reported that when interacting with people who had been to the website, they were clearly better informed. And donations? Donations doubled.  

During the Ebola crisis, the Ebola factsheet page on the World Health Organization (WHO) website was a vital resource for doctors, nurses and other interested parties. Yet it was a real challenge to get this page reviewed and updated. The reason was that the WHO was so focused on publishing new information about Ebola that it struggled to review and update essential content that was already published. It seemed that everybody within WHO wanted to publish something on Ebola, to show what they or their division was doing to combat the disease. WHO knew how important the factsheet page was, how it was infinitely more important than the vast majority of other pages on Ebola, but it too was paralyzed by a tsunami of internal publishing. 

The Web is great. All the challenges data faces can be overcome. Let me tell you a story about a fellow named Tom. It was 1993 and Tom was living in Washington DC. Tom had a serious hip issue. He was aware of research about a novel approach to hip surgery and he was constantly asking his doctor to get him a copy of the research. After months and months of trying to get his doctor to give him the research, Tom got frustrated and went to a medical library. There, he was arrested for attempting to get the research. Arrested. 

A Web full of data, for all its drawbacks, is infinitely better than a world where some high priests guard the knowledge. We must learn to navigate the data, to filter and interpret it. If we create data and information, we must take responsibility for it, from the day it’s published until the day it’s removed—should that day need to arrive. We must review what we publish with a regularity that reflects its likelihood to go out-of-date.  
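To make that last principle concrete, here is a minimal sketch in Python, purely illustrative and not a method from the book, of how a content team might flag pages for review based on how quickly their subject matter goes stale. The content types, intervals and page records are all hypothetical:

    from datetime import date, timedelta

    # Hypothetical review intervals: volatile content is reviewed often,
    # stable content less so. The values are illustrative, not prescriptive.
    REVIEW_INTERVALS = {
        "health-guidance": timedelta(days=90),
        "product-pricing": timedelta(days=30),
        "company-history": timedelta(days=730),
    }

    def pages_due_for_review(pages, today=None):
        """Return URLs of pages whose last review is older than their interval."""
        today = today or date.today()
        due = []
        for page in pages:
            interval = REVIEW_INTERVALS.get(page["type"], timedelta(days=365))
            if today - page["last_reviewed"] > interval:
                due.append(page["url"])
        return due

    # Example: one long-neglected health page, one recently reviewed page.
    pages = [
        {"url": "/ebola-factsheet", "type": "health-guidance",
         "last_reviewed": date(2024, 1, 10)},
        {"url": "/our-story", "type": "company-history",
         "last_reviewed": date(2025, 6, 1)},
    ]
    print(pages_due_for_review(pages, today=date(2025, 7, 1)))
    # -> ['/ebola-factsheet']

The point of the sketch is the principle, not the tooling: every page carries a review clock, and the clock ticks faster for content that is more likely to go out-of-date.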

World Wide Waste: How digital is killing our planet and what to do about it




About the author

Gerry McGovern

Gerry McGovern has developed Top Tasks, a research method to understand what truly matters to people. Top Tasks has been used by organisations such as Microsoft, Cisco, NetApp, Toyota, the World Health Organisation, IBM, the European Union, and the US, UK, Dutch, Canadian, Norwegian and Irish governments.

Top Tasks helps organisations improve customer experience by identifying and optimising customer top tasks. It has been developed over 15 years and has been used more than 400 times, with an estimated 300,000 customers participating.

A highly regarded speaker, Gerry has spoken on digital customer experience in around 40 countries. He has written eight books. His latest is called World Wide Waste: How digital is killing our planet and what to do about it.

The Irish Times has described Gerry as one of five visionaries who have had a major impact on the development of the Web. He has appeared on BBC, CNN and CNBC television, taken part in various radio shows, and featured in numerous print media publications. He is the founder and CEO of Customer Carewords.
