As an educator, this type of discussion is of the utmost importance to me, and I intend to give it more thought. For me personally, there are three levels to consider:
My role as the head of my school.
My personal use of AI, including its benefits and potential dangers.
The application of this technology in various contexts, such as business.
At first glance, I believe the article presents a somewhat overly optimistic and simplistic view of the role of AI, even in a business setting. Business encompasses more than bottom-line productivity; it also involves ethical considerations and the potential consequences of technology on its users. While AI can enhance productivity, it also risks diminishing the cognitive ability of those who use it excessively or as a substitute for their own thinking and imagination. We spend the majority of our waking lives at work, and how we spend that time affects not only the organization for which we work but also us as human beings. Tools are not neutral – they impact us for good or for ill.
CONFUSING CONTEXTS
In educational contexts, it is entirely appropriate to be suspicious of generative AI. School and college assessments exist for a specific purpose: to demonstrate that students have acquired the skills and knowledge they are studying. Feeding a prompt into ChatGPT and then handing in the essay it generates undermines the reason for writing the essay in the first place.
When it comes to artistic outputs, like works of fiction or paintings, there are legitimate philosophical debates about whether AI-generated work can ever possess creative authenticity and artistic value. And there are tough questions about where the line might lie when it comes to using AI tools for assistance.
But issues like these are almost entirely irrelevant to business operations. In business, success is measured by results and results alone. Does your marketing copy persuade customers to buy? Yes or no? Does your report clarify complex issues for stakeholders? Does your presentation convince the board to approve your proposal? The only metrics that matter in these cases are accuracy, coherence, and effectiveness—not the content’s origin story.
When we import the principles that govern legitimate AI use in other areas into our discussion of its use in business, we undermine our ability to take full advantage of this powerful technology.
I believe we engineers said the same thing at the advent of the digital calculator in classrooms. I can attest that students without a calculator struggle to do such things as distinguish a sine graph from a cosine graph, or to do basic arithmetic. On the other hand, I can also attest to students' increased awareness that they must be able to use computer tools to do such things as compare graphical trends in a function's output values across a range of input parameters, or model numerical outputs over a variety of input conditions. We (engineers) have lost something, yet we have gained something.
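To make that concrete, here is a minimal sketch of the kind of parameter-sweep comparison I mean, in Python with numpy and matplotlib. The damped-oscillation function is just an illustrative stand-in, not anything from the article:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative function: a damped oscillation y(t) = e^(-zeta*t) * sin(2*pi*f*t)
def response(t, zeta, freq=1.0):
    return np.exp(-zeta * t) * np.sin(2 * np.pi * freq * t)

t = np.linspace(0.0, 5.0, 500)

# Sweep the damping parameter and compare the resulting trends on one plot
for zeta in [0.2, 0.5, 1.0, 2.0]:
    plt.plot(t, response(t, zeta), label=f"zeta = {zeta}")

plt.xlabel("t")
plt.ylabel("y(t)")
plt.title("Output trends across a range of damping values")
plt.legend()
plt.show()
```

This is exactly the kind of exercise a calculator-era student could not have done by hand, and exactly the kind a computer-era student must know how to set up.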
In the same vein, stop treating thinking and imagining as goals in themselves; focus instead on what students are thinking about or imagining. Yesterday (before AI), we demanded that students defend their ability to think about or imagine one concept at a time. Today (with AI), we should demand that students defend their ability to think about (critically compare) and imagine (extrapolate beyond) a host of concepts drawn from all sides of a spectrum of ideas.
Allow AI to be a tool for students to collect and summarize information across larger data sets. Demand that they (the students) defend what they know, can apply, can synthesize, and can create from that information with the same levels of integrity, precision, and accuracy as were required in the past.
The article you reference seems to make such arguments.
With all due respect, I would not rely on Fast Company for AI policy advice. I would look for peer-reviewed advice from independent research organizations.
Believe me, Fast Company is not my source of advice. I do think they make a valid point – context matters in the use of a tool like AI. That was really my only point, while acknowledging that the article was overly optimistic and simplistic.
I like the point they make about ownership. This is how I see, for example, GitHub Copilot in software development. But this already feels dated. Now the high priests of AI want us to use agents to do work autonomously. So who owns the work of these agents? They often make rather large changes to a code base, which makes reviewing those changes the real bottleneck. The idea that hours of expensive engineering work could be replaced by a cheap agent must be tempting for any executive.
Our company is somewhat unusual in that - though we have many policies - we default to principles rather than policies. Here's what we've put together re: AI:
1. Humans Are Always Accountable
AI never replaces our critical thinking or judgment. We use AI to accelerate our work: helping us think faster, communicate more clearly, and explore new ideas. The credit and consequences belong to humans, not the tool.
Strategy and priority decisions will be made by people.
Where a person has delegated any function to AI, that person remains fully responsible and accountable for the decisions made.
We apply wise judgment about what should - and should not - be automated. Automation may handle execution, but people always remain accountable for the outcome.
2. Build Trust, Not Confusion
We put others first by using AI in ways that make work easier and clearer, never harder or murkier.
When we can’t review AI’s work, or decide not to, we make it clear what is AI-generated and unreviewed.
We will not use AI to manipulate emotion or distort the truth.
We use AI to produce clarity, not to add unnecessary detail. We will not substitute AI’s volume of words for our own understanding.
3. Steward Data with Care
We handle data entrusted to us responsibly and securely. Every interaction with AI reflects our duty to protect the information in our care.
We apply the same data-classification and security standards to AI tools as we do to any other vendor or system.
We share only what’s necessary, with awareness of how data may be stored, learned from, or reused by AI systems.
We take responsibility for ensuring that AI enhances data security, never compromises it.
I think I would have been much less weak in math if basic skills had been taught before problem solving.
For instance, I can’t do much with logs, even with a calculator, unless I visualize how the procedure would look on a slide rule. I was, of course, never taught how to use a slide rule. Horrors, no. Amazing how illustrative they are, though.
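What makes them so illustrative is the one identity a slide rule physically embodies: its scales are marked at distances proportional to $\log x$, so laying two lengths end to end adds logarithms and thereby multiplies numbers:

$$\log(ab) = \log a + \log b$$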
I also got to college without being able to do basic trig. Then I discovered this magic thing called a unit circle and the demonic incantation, SOHCAHTOA.
Neither of which I’d ever heard of, despite going through algebra, geometry, and trig in high school math. My public school days were governed by experimental curricula.
Trig was a set of 40 or 50 identities one had to memorize. I couldn’t. I think my passing grade was an act of mercy, not education.
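For anyone else who missed it: SOHCAHTOA just encodes the three ratio definitions,

$$\sin\theta = \frac{\text{opposite}}{\text{hypotenuse}}, \qquad \cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}}, \qquad \tan\theta = \frac{\text{opposite}}{\text{adjacent}},$$

and on the unit circle the point at angle $\theta$ is simply $(\cos\theta, \sin\theta)$, from which $\sin^2\theta + \cos^2\theta = 1$ and a surprising number of those 40 or 50 identities fall out.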
With AI, I might not have had to memorize anything - but I bet I would have had even less understanding of the meaning behind the operations.