I asked ChatGPT-4 to summarize the story from the link below. Here is the result:
The story discusses the growing prevalence and sophistication of deepfake technology, extending beyond just images and videos to include text as well. It highlights how personalized algorithms are categorizing users and feeding them tailored content, creating “media bubbles” where people only see information relevant to their specific community. This, in turn, amplifies the impact of deepfake content, shaping users’ perceptions and beliefs. The author, a chief technology officer at Silent Eight, predicts a future where distinguishing between real and fake content becomes nearly impossible, with potential serious implications for journalism, politics, and personal autonomy. The story underscores the shift from a once impartial internet to a platform where every user’s experience is uniquely tailored, and increasingly manipulated, by advanced algorithms and artificial intelligence.
My take? Deep-fake content will enhance and exacerbate confirmation bias, leading to further division. Moreover, because many people do not have a coherent worldview but instead possess a synthesized one, assembled by absorbing and adopting ideas and values from the prevailing and fleeting trends of our time without critical thought, they are particularly vulnerable to manipulation.
I agree! In part, this is the result of “doom scrolling” and the incessant propagation of negativity by the media to increase audience and advertising dollars. As they say, “if it bleeds it leads…” a dreadful way to do business. The word “crisis” must sell because everything has become a crisis.
I’m glad to hear you agree, but I’d point out that by sharing the post and opinion above you’re supporting and participating in the very thing you’re railing against. It is going to be increasingly important to recognize the effect of the 24-hour news cycle, and we should start teaching healthy relationships with our devices, social media, the news outlets, gaming, etc…
What I’m concerned about is robbing the next generation of hope by telling them over and over and over again that there’s nothing that can be done, and everything is falling apart. That’s not true now and never has been, no matter how many generations removed one is or how different the world is now from when they were young.
Hope for a brighter future, that we can engineer and build our way out of our many current crises is part of the reason I’m a fan of Apple. I’ve always felt like the best thing I get from Apple is a glimpse of the future, a piece of technology that we can have today that, just maybe, we can use to build a better world. I’m not naive enough to think Apple or any company can solve our problems, but we can draw inspiration from them. That’s what we need.
So, yea, generative AI is going to be an issue for real journalism. Maybe we need new media outlets that commit to never using AI to write articles and that hire real journalists who will build trust in the publication.
We need education on healthy relationships with technology. We need clean energy tech like solar and flywheel batteries. We need better recycling and companies who stop using disposable plastics. There’s work to be done. Wallowing in despair will not help, and I refuse to participate.
If so, I apologize. I certainly don’t want to be the proverbial pot calling the kettle black.
That said, perhaps in my defense, I actually take a positive, proactive approach to these things. In fact, the reason I post some of this is so that we can work against the negative and promote the positive. For example, I just completed our 2023-2026 Strategic Plan. One of the tactics (under a strategy, which in turn sits under a strategic goal) reads as follows:
Assess AI: What exactly is it? How are companies deploying it, e.g., integrated with MS Office and Google apps? What are the potential positive uses of this technology in our classrooms and offices? What are the downsides for learning, teaching, and academic integrity? How should we revise our instructional practices to accommodate the positives and ameliorate the negatives of AI, consistent with our educational philosophy and mission? What class or classes should we add to help our students develop technical skills in using AI? What policies do we need to put in place? What must we communicate to staff, parents, and students, and when? How do we effectively train our teachers to use AI in the classroom?
In addition to this particular tactic, I have subscribed my entire senior leadership team, my IT department, and my EA to ChatGPT-4, and I’ve given them a lot of research articles to read. The purpose is to figure out, first hand, how we can positively and effectively deploy this technology in our offices and classrooms while simultaneously seeking to limit the negatives.
Some are circling clockwise, others counter-clockwise.
… followed by …
I like the second sentence better. The first is a subjective and unprovable general observation about unquantifiable things (emotions). The second defines an action that will … by analogy … put some plugs in the drains.
The clockwise circling group calls the other evil. The counter-clockwise group calls the other deplorable. The limit here is not on what we can or cannot engineer to fix problems. The limit is when we will stop dancing around each other long enough to talk sincerely about the problems that must be fixed. At some point, while you may have all the knowledge, skills, and resources to engineer the perfect rocket to explode against an oncoming comet, you will have simply let the required time run out as you chatter and rant past each other.
As to deep-fake content … I trust that Apple and Microsoft (and Unix and …) developers are not purposely making apps intended to lead folks astray. Rather, as human beings constantly strive to engineer themselves into a better living, they, by nature, also create things that can engineer them into danger. Developers are asked to build ways for journalists to create more and more content that is true and real with less and less effort. The perversion is the creation of ways to create deep-fake content. You cannot have the former without the latter.
I appreciate the perspectives you’ve brought, especially the call to be less negative about future possibilities.
But sometimes it truly seems that the clock is ticking, and we are still just circling the drain … chanting voodoo curses at each other.
As a guest said in a recent MPU, we’ve evolved to pay much more attention to threats than to things that please us. It’s a survival trait. Being hopeful that there’s not something hiding around the corner ready to eat us is asking to be eaten. It’s better to worry enough to check.
Also, hope is the weapon so often abused by people who want us to trust them when we really shouldn’t.
Having said that, being grateful and paying attention to the good things that come our way is vital: appreciating people and experiences is how to build a positive mindset, not denying reality.
News is a matter of trust. My optimistic take: after what you describe, societies will begin noticing the deepfaking tricks and dangers and start demanding proof of provenance for the news pieces they are exposed to. The deepfake trend cannot sustain itself, as it requires more and more attention and clicks, so fake news will become more and more outrageous and polarizing until, inevitably, people begin questioning what they are fed. What would happen if you saw a video of the President eating a cat alive with naked people around him? I would think, “Look, this guy may be a moron, but there’s no way he can do that!”
Beyond after-the-fact fact-checking, technology will enable audiences to evaluate who is saying what and to establish trust in a tamper-proof way (we have that already, from blockchain to simpler tools like SSL certificates on websites). And then the old journalist’s signature itself will become an asset. Who said what. Who was there. Who is putting their professional prestige behind an assertion. I see this as an opportunity for a renaissance of journalism and media outlets.
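The “who said what” idea above can be sketched in code. This is a minimal, hypothetical sketch (not any real provenance standard such as C2PA): it uses an HMAC over the article content as a stand-in for a publisher’s signature, so a reader holding the publisher’s key can detect tampering. All names and keys here are made up for illustration.

```python
import hashlib
import hmac
import json

def sign_article(article: dict, publisher_key: bytes) -> str:
    # Serialize deterministically so the same content always yields the same signature.
    payload = json.dumps(article, sort_keys=True).encode()
    return hmac.new(publisher_key, payload, hashlib.sha256).hexdigest()

def verify_article(article: dict, signature: str, publisher_key: bytes) -> bool:
    # compare_digest avoids timing side channels when checking signatures.
    return hmac.compare_digest(sign_article(article, publisher_key), signature)

key = b"hypothetical-publisher-secret"
article = {"byline": "A. Journalist", "body": "Original reporting."}
sig = sign_article(article, key)

print(verify_article(article, sig, key))   # True: content matches the signature
tampered = {**article, "body": "Fabricated claim."}
print(verify_article(tampered, sig, key))  # False: any edit breaks verification
```

A real system would use public-key signatures instead, so anyone can verify authorship without holding a secret; that is roughly what SSL/TLS certificates and provenance standards provide in their different ways.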
Yes, political bias will always exist, but it will be the choice of the reader, not something delivered via some random tweet or TikTok video that an unsuspecting person gets exposed to via WhatsApp.
It will take some time, though. In this vision things need to get much worse before they get any better.
I use claude.ai instead of ChatGPT. Here is how it summarized the same article. Interesting comparison.
The article by Michael Wilkowski, chief technology officer at Silent Eight, argues that deepfake technology enables more sophisticated manipulation of images, videos, and text. This could spread misinformation and erode trust in online content. Algorithms already customize newsfeeds to show only relevant content, creating filter bubbles or “media silos.” In the future, AI may generate personalized news summaries, making it hard to discern real vs. fake content. People might only get news from customized bot-created feeds, allowing politicians to target influential messages. While companies now combat deepfakes, they may eventually monetize approved deepfake bots. Wilkowski believes this will accelerate people’s inability to differentiate truth from opinion and fact. People are being trained to accept the limitations of deepfakes. In conclusion, deepfakes could seriously impact truth, trust, and communication as AI proliferates and erodes our ability to identify misinformation.
When I left school, the internet did exist, but not many people were using it, and no schools were. Years later, I visited my old school as part of an alumni event. The principal gave us a tour of the school. His academic background was in computer science, math, and physics. He talked about the challenges of “integrating” the internet into teaching. Back then there was a huge discussion, especially among parents, about the “bad content” on the internet. Those parents expected the school to make 100% sure that the pupils were safe at all times. The principal criticized that the parents were missing the point to some degree. He said that the most important thing we need when using the internet is no different from what we need when consuming TV, newspapers, books, and whatnot: media competence.
Deepfakes, AI, and whatnot … this is still what it is all about: media competence.
Everybody has to be able to deal with whatever content comes along: assess and evaluate it, filter and select it, verify it (source checking: is it legit, and can it be verified?). Do not merely consume “information” but learn to be informed. If everybody has internalized this, I am not afraid. The challenge is to get everybody to a point where they have developed the skills and the competence to consume “media” in a healthy way. It is a huge task for schools and for societies world-wide, but it is worth trying, and it is necessary, to get there.
That is probably one of the first NPR articles I have ever broadly agreed with. I say that as I type this message on my iPad, which has cobalt in its battery. We all, every one of us, own a piece of this disgrace.
Prior to Big Tech’s incursion into the internet, most content was written by individual users and groups. The last 20 years saw this part of the web diminished (or at least drowned out) by corporate stuff and spam. And most of what is on the internet is probably rubbish. (That’s true now even without AI: Google brought about the degradation of the internet by itself just through SEO, though ironically it may not survive this latest shift in tech, given the increasing redundancy of its primary product.)
This morning, I see this going one of two ways: we end up with a “two tier” internet where the majority of stuff is crap and there is a thriving counterculture “underneath” that of independent curators and content producers, or our societies actually move to regulate some of this stuff and limit the spread of rubbish on the internet. As the latter seems unlikely (for now), we’ll probably end up with the former. Being a recognisable human online will become more important, and those that do a good job of curating and sharing content will thrive. Spaces where humans can interact knowing they are with other humans (like this one!) will become more important.
(Also, as others have said, journalism has been in decline for years. HOWEVER, there are also more people than ever paying for newsletters, etc. Clearly we are quite happy to pay actual humans for writing/content we like, and journalism’s decline speaks more to their own problems than a decline in “the reporting of news” generally. Across the English-speaking realms at least, there are plenty of independent writers doing good journalism and being financially rewarded for it by their readers. I don’t see that stopping.)