
Tech Trends: AI Regulations, AI Drama, and Did I Mention AI?

Tech Trends November 2023

Each month, Plugged In will release a blog with the latest technology and social media trends. We’ll let you know what changes to keep an eye out for. We’ll offer some tips about how to handle technology in your family. And of course, we’ll give you the scoop on those things called “hashtags” so you can stay up to date on all the things your kids might be obsessed with.

And ICYMI (“in case you missed it,” for those not up on their social acronyms), you can check out October’s Tech Trends, too.

Artificial Intelligence Gets Regulated

On October 30, President Biden signed an Executive Order establishing new standards for AI safety and security. Here’s what that order stated, in a nutshell:

  • Developers must share their safety test results and other critical information about their AI systems with the U.S. government.
  • Standards, tools and tests must be developed to ensure that AI systems are safe, secure and trustworthy.
  • Safeguards will be established to protect against the use of AI to engineer dangerous biological materials.
  • Systems that can detect AI-generated content and authenticate official content must be created to protect Americans from AI-enabled fraud and deception.
  • An advanced cybersecurity program will be created to develop AI tools that will find and fix vulnerabilities in critical software.
  • A National Security Memorandum will be created to further direct actions on AI and security.

Additionally, the order contains stipulations to protect Americans’ privacy, advance equity and civil rights, stand up for consumers, patients and students, support workers, promote innovation and competition, advance American leadership abroad, and ensure responsible and effective government use of AI.

Chances are that as AI continues to develop, more regulations will come down the pipeline. But considering how much is happening in the world of AI, I find it at least somewhat reassuring that actions are being taken.

OpenAI’s Drama

On Nov. 17, OpenAI (the company behind ChatGPT) fired CEO Sam Altman because “he was not consistently candid in his communications with the board.” Sound confusing? Don’t worry, you’re not alone. OpenAI’s other execs were confused, too, says The Washington Post. And OpenAI’s president and co-founder Greg Brockman quit in solidarity with Altman. But the board still didn’t offer up any additional information in the days that followed.

Instead, according to ABC News, the board tapped Mira Murati, OpenAI’s chief technology officer, as interim CEO. But then she was replaced just two days later by Emmett Shear, former CEO of Twitch.

And while all of this was going down at OpenAI, Microsoft (a key investor in OpenAI) took advantage of the chaos by offering jobs to Altman, Brockman and any other OpenAI employees who left as a result of Altman’s dismissal. ABC News also reports that nearly all 800 employees at OpenAI signed a letter Monday morning threatening to quit and take Microsoft CEO Satya Nadella up on his offer unless the board resigned and reinstated Altman as CEO.

Well, as of November 21, OpenAI has complied with those employee demands. A new board is being formed, and Altman is set to return to the company. We still don’t know the specifics of what went down, but this isn’t the only drama surrounding the company …

Cheating With ChatGPT

Cheating has always been a cause for concern among college professors. It’s why tools such as Turnitin (software that can detect if a paper was copied from a previous student’s work) exist. Unfortunately, the release of ChatGPT by OpenAI last December put already overwhelmed educators into “full-on crisis mode.”

According to Town & Country, 26% of K-12 teachers said they caught a student cheating using ChatGPT. An attorney reported that he’s seen four times as many academic misconduct cases since the program’s release. And a professor at Texas A&M even flunked a class of seniors out of fear that they had used the AI tool. (He later recanted, and the students were exonerated.)

Forbes reports that more than half of college students believe that using artificial intelligence is, in fact, cheating. However, only 41% actually believe it’s morally wrong. And only 27% believe AI tools should be prohibited in educational settings.

But perhaps what’s more unsettling is how many schools and administrators are expecting students to police themselves. Despite the alleged panic, only 31% of students are even aware of rules prohibiting the use of AI. And 54% reported that their instructors hadn’t discussed it in class.

That’s shocking. Especially since students at Harvard discovered that it only took ChatGPT five minutes to write an essay on a subject that originally took them six hours to research and write. Granted, Harvard has policies in place prohibiting the use of ChatGPT. But if other schools don’t have such strict policies, it opens the door for students to take some shortcuts.

Alyson Klein, writing for Education Week, has some recommendations for teachers dealing with this problem. But perhaps it would behoove parents to share this information with their own kids and teens to prevent a visit to the principal’s or dean’s office:

  1. Know the expectations. Teachers should already be addressing the use of AI, but if you aren’t sure what’s allowed, ask them, just to be safe.
  2. Attribute your sources. If you’ve been given permission to use AI tools such as ChatGPT, cite them in your work so there isn’t a question of what may or may not have been generated by a computer.
  3. Be honest about your workload. Sometimes, it can feel impossible to keep up with all your assignments. And when that happens, it can be easy to let AI complete that last bit of homework so you can actually get some much-needed sleep. (The CDC recommends 8-10 hours per night for teenagers and even more for younger children.) But rather than get flagged for cheating, it might be worth having a conversation with your teacher and/or parents (who can advocate on your behalf).
  4. Understand why learning to write on your own is important. Look, once upon a time, we were told that we should learn long division because we wouldn’t always have a calculator on hand. Then the smartphone came along. Well, now we’re experiencing the same thing with AI generators and writing. But AI’s abilities reach far beyond a calculator’s, which makes them more dangerous to use as a crutch. When you let a machine do the talking for you, you lose your ability to reason, think critically and form persuasive arguments. And it’s important to have those skills in your brain—not just use those on your phone.
Emily Tsiao

Emily studied film and writing when she was in college. And when she isn’t being way too competitive while playing board games, she enjoys food, sleep, and geeking out with her husband as they indulge in their “nerdoms,” the collective fan cultures of everything they love, such as Star Wars, Star Trek, Stargate and Lord of the Rings.

5 Responses

  1. If you trust Jesus Christ as your lord and savior, you will have nothing to fear from artificial intelligence (AI).

  2. This is all good advice, but I think if you really want to discourage using AI for cheating, you should write an article showing how frequently ChatGPT makes mistakes and how often it can disappoint you. The longer this article is, the better. Some students might be swayed by your statement that “When you let a machine do the talking for you, you lose your ability to reason, think critically and form persuasive arguments. And it’s important to have those skills in your brain—not just use those on your phone,” but I think they will be outnumbered by students who think “That’s easy for them to say. They haven’t met my teachers or my parents. They’re asking me to do the impossible.” And they’ll either continue to use ChatGPT or they’ll give up completely, thinking either “They’re impossible to please so I’m not wasting any more time trying” or “They don’t trust or appreciate or respect me so I just don’t care what they think about me anymore.”

    Also, I do not think AI should ever be connected to any weapons or explosives.

  3. These are good tips.
    A few things I’ve noticed about AI:
    There are now also websites to check if a story or essay was written by AI. However, when I machine-translated one of my stories into Spanish and then checked to see if it was flagged, it was not. Granted, Spanish is closer to English than most languages, so the sentences can correspond more closely.
    I also asked an AI to write a fanfiction with a specific plot as one I had written before and posted online. It wound up being eerily close to my own, even though I had only written a two-sentence prompt! The main difference was that it was much shorter and written in simpler sentences, but I couldn’t help but wonder if it had just reworded my fanfiction.

  4. If you trust in Jesus Christ as your lord and savior, you will have nothing to fear from artificial intelligence (AI).