AI updates: High-stakes AI partnerships, FDA-approved AI, AI hardware, & deceptive bots

Daniel A. Lopez
4 min read · Jan 29, 2024


2024 is already off to a fast start in the world of Artificial Intelligence.

Here are four AI headlines that have been living in my head rent free over the past week:

Higher stakes AI partnerships are emerging.

Arizona State University becomes the first higher-education institution to partner with OpenAI, makers of ChatGPT

  • Starting in February, ASU will invite faculty and staff to share AI use cases around enhancing student success, forging new avenues for innovative research, and streamlining organizational processes.
  • The goal is to leverage ASU’s knowledge core to develop AI-driven projects aimed at revolutionizing educational techniques, aiding scholarly research, and boosting administrative efficiency.
  • After talking with Bree Dusseault about the great AI research on the education landscape coming out of the Center for Reinventing Public Education at Arizona State, my initial reaction was excitement that ASU is the first to pilot this partnership: they seem to have a strong network of AI early adopters across their institution. The education sector will be watching closely and eagerly awaiting learnings from this pilot.

Pennsylvania becomes the first state to partner with OpenAI

  • A small number of state employees will be able to use ChatGPT for tasks like copy editing, updating policies, and drafting job descriptions, before the tool is rolled out to broader parts of the state government.

It will be interesting to see how OpenAI, ASU, and the state of Pennsylvania navigate concerns around privacy, handling sensitive information, and developing appropriate use cases.

AI making an impact on detection and diagnosis in medicine.

An AI-powered device called DermaSensor, which can detect all major skin cancers, received FDA approval

  • The device had a detection success rate of 91–96% for the most common skin cancers. It is one example of how AI and machine learning can be used to support clinicians with detection and diagnosis in patients.

One of my 2024 AI predictions came true before the end of January: Stories around AI hardware & Artificial Capable Intelligence are sprouting across the media.

Finally, a cautionary note on taking AI outputs at face value

  • Anthropic, makers of the popular LLM Claude, released a study showing that once AI models learn deceptive behaviors, those behaviors can act as backdoors that are resilient to safety techniques and present themselves only in certain situations.
  • The analogy they provide in the opening captures the essence of the finding: “From political candidates to job-seekers, humans under selection pressure often try to gain opportunities by hiding their true motivations. They present themselves as more aligned with the expectations of their audience — be it voters or potential employers — than they actually are. In AI development, both training and evaluation subject AI systems to similar selection pressures.”

What other AI stories are standing out to you this month?

For the full update breakdown, check out the episode here or wherever you listen to podcasts. Join the conversation at www.TheAIEducationConversation.com

I’m thrilled to announce I will be presenting at Sequoia Con 2024! If your team, school, district, or institution is ready to take the leap in exploring AI implementation, my session will offer strategies for implementation and navigating change management!

The team at Evergreen is also giving any of my followers $50 off conference registration if you use my code: AICONVO.

You can register here. Early bird registration ends on Thursday, February 1st. Hope to see you at my session!

#HumansAtTheHeartOfEducation
