Submitted by keshiuchi

When artificial intelligence is mentioned, it stirs up all kinds of conflicting emotions. The first reaction from many is one of fear: that AI will take away people’s jobs. But over time, we in the project management world have learned that AI actually makes certain tedious parts of our work easier. It frees up time for other parts of project management, like leadership development.

Whether we are aware of it or not, we have all used some AI tools in our professional lives. Now new developments are coming faster than ever. In the last few years, I have been handling a lot of IT projects, and suddenly even found myself managing cybersecurity projects, even though I have absolutely no background in IT.

The big lessons for me were:

  1. AI is not scary, but it can become dangerous in the hands of the wrong people.
  2. Most cybersecurity software has some form of machine learning these days, used for implementation and pilot phases of projects.
  3. The old adage for IT, “garbage in, garbage out,” still applies: an AI tool is only as “smart” as the algorithms programmed into it and the data fed into it (see the short sketch after this list).
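
Here is a minimal sketch of that adage, with made-up numbers: the same naive estimator produces a sensible forecast or a useless one, depending entirely on the data it is given.

    # A minimal "garbage in, garbage out" sketch with made-up numbers:
    # the same naive estimator gives a sound or a useless forecast
    # depending entirely on the quality of the data it is fed.

    def estimate_duration(past_durations_days):
        """Naive forecast: the average of past task durations (in days)."""
        return sum(past_durations_days) / len(past_durations_days)

    clean_history = [4.0, 5.0, 4.5, 5.5, 5.0]    # carefully recorded durations
    dirty_history = [4.0, 5.0, 450.0, 0.0, 5.0]  # a typo and a missing value logged as 0

    print(estimate_duration(clean_history))  # 4.8 days: plausible
    print(estimate_duration(dirty_history))  # 92.8 days: garbage out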

This all got me wondering how long AI has really been around—and what it means for us at work.

History
This threw me off: AI has been around a lot longer than I thought. I am just listing a few key dates:

  • 1950-56: The term “artificial intelligence” was coined and became popular. And it started with a game: checkers. (To put this in perspective: The first personal computer, the Altair 8800 from MITS, was introduced in 1974.)
  • 1959: The term “machine learning” was coined by Arthur Samuel.
  • 1961: The first industrial robot, Unimate, started working at General Motors.
  • 1979: The American Association for Artificial Intelligence was formed.

AI had its beginnings before many of us were even born. But going by the IBM website, AI started booming in the 1980s. Then in the 2010s, machine learning came into its own with deep learning and big data, and machines began imitating human brain functions.

In the 2020s, Generative AI (GenAI) came to the forefront. Now, learned content is being used to create original content. Newer tools like ChatGPT are refined with reinforcement learning from human feedback (RLHF), where human reviewers rate the model’s responses and the model is tuned toward the preferred ones. Because these tools also pick up on the language and tone of each prompt, two AI-generated articles from different people may turn out very different.
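
To make the RLHF idea a little more concrete, here is a toy sketch in Python. It is not how ChatGPT is actually built; the candidate answers, the ratings and the update rule are all invented for illustration. A stand-in “human rater” scores sampled answers, and the system gradually shifts its preference toward the style that rates well:

    # A toy illustration of the idea behind RLHF (not how ChatGPT is
    # actually implemented): a stand-in "human rater" scores sampled
    # answers, and the preference weights shift toward well-rated ones.
    import random

    random.seed(1)

    candidates = ["short blunt answer", "polite detailed answer", "off-topic answer"]
    weights = {c: 1.0 for c in candidates}  # the "model" starts with no preference

    def human_rating(answer):
        # Stand-in for a human rater who prefers the polite, detailed style.
        return {"short blunt answer": 0.3,
                "polite detailed answer": 0.9,
                "off-topic answer": 0.1}[answer]

    # "Training" loop: sample an answer, then nudge its weight up or down
    # depending on whether the rating is above or below neutral (0.5).
    for _ in range(1000):
        answer = random.choices(candidates, weights=[weights[c] for c in candidates])[0]
        weights[answer] *= 1.0 + 0.1 * (human_rating(answer) - 0.5)

    print(max(weights, key=weights.get))  # -> "polite detailed answer"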

Definitions
Currently, “artificial intelligence” has become one of the most misused terms, so we need to look into definitions. (You can buy “AI toothbrushes” these days…but is this really AI technology?)

Merriam-Webster defines it as “the capability of computer systems or algorithms to imitate intelligent human behavior.”

IBM defines it as follows: “Artificial intelligence (AI) is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” A lot of the other tech websites have similar definitions.

I think the takeaway here is that AI, machine learning and GenAI are tools that can make our work much easier. If we tell a tool which content to use and what format to put it in, it can do a lot of our documentation in seconds. What we used to painstakingly prepare by hand over hours can now be produced at the push of a button.

However, I strongly believe that none of these tools replaces the human mind—and you still need to go through your documents and double-check the content to ensure it is correct. And as these technological developments make our work easier in some areas, it has become more and more apparent that the real work for project managers is leading the teams and bringing out the best in each individual.

Do we need to fear AI?
If you are from my generation, you might have watched some of the (what I found to be) deeply traumatizing sci-fi movies where machines take over.

But I think we are safe. As much as AI takes over certain parts of our work (and some parts of job descriptions will disappear due to this), AI is not about to take over fully. It is trying to imitate the human brain, but it is not replacing it any time soon.

For example, I watched a video clip of a man having an online conversation with a robot he had created. It demonstrated machine learning and its limitations quite clearly. At times, it was flabbergasting how much the robot picked up. It kept repeating what the man was saying and then analyzing it according to set patterns.

However, at the end it also showed that the pre-programmed information was not enough for a fully correct analysis. The robot had been taught to analyze the background behind a person in an online video call and to draw certain conclusions.

It interpreted the shelves (rectangular shapes) as meaning the person owned many books and was well read. But the shelves in the video held no books at all, only decorative items that were mostly rectangular.

In this case, the programmer could easily fix the fault by refining the items analyzed and the conclusions drawn. But I think this example clearly shows the limitations AI has when the data fed into it is insufficient. At the same time, it also shows that, with enough refinement of the data analysis, the sky is the limit.

I strongly believe that with the right human input and corrections, AI can produce very useful documents. I think the lack of corrections, and publishing or sharing AI-generated documents without double-checking the content, will soon separate those who use the tool effectively from those who just see it as something that will do the work for them.

One of the major benefits of GenAI-based tools will be how they change decision-making. Because they can analyze massive amounts of data in very little time, they allow organizations to use data-driven analytics and insights to make decisions faster and more efficiently.
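
As a tiny illustration of what that can look like, here is a sketch in Python with entirely made-up task figures: it aggregates planned-versus-actual durations per workstream and surfaces the one slipping most, the kind of summary that used to take hours of spreadsheet work:

    # A small sketch of data-driven decision support; all figures are
    # made up. Aggregating task records flags the workstream that is
    # slipping the most.
    from collections import defaultdict

    tasks = [  # (workstream, planned_days, actual_days), hypothetical data
        ("infrastructure", 5, 7), ("infrastructure", 3, 3),
        ("security", 4, 9), ("security", 2, 5),
        ("training", 6, 6), ("training", 4, 5),
    ]

    slippage = defaultdict(list)
    for stream, planned, actual in tasks:
        slippage[stream].append(actual - planned)

    # Rank workstreams by total slippage, worst first.
    for stream, slips in sorted(slippage.items(), key=lambda kv: -sum(kv[1])):
        print(f"{stream}: average slip {sum(slips) / len(slips):+.1f} days")
    # "security" tops the list, so that is where attention goes first.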

It will even affect our leadership styles if we learn to use GenAI to enhance or confirm our gut instincts. As Lomit Patel points out in his LinkedIn article: “Human intuition and AI are two complementary sources of knowledge that can enhance decision-making.”

There is a downside…
As machine learning progresses, it will become much harder to differentiate between truth and AI-generated falsehoods. There is a real danger of being fed falsehoods produced by poorly defined machine learning and algorithms.

Of course, there is also a danger of this being deliberately abused. However, just as we learned to use our common sense to judge whether things we find on the internet are true, we need to keep using it for GenAI-generated content.

The easy availability of GenAI also opens up a whole new level of ethical considerations, and will in due course bring up many new legal considerations as well. Data privacy will be at the forefront of these considerations.

Things like unintended racism and discrimination can become major issues. For example, if during the learning phase a GenAI tool is mainly fed information from one race, be it pictures or even language, then its outputs will lean strongly toward that race and may not recognize data or pictures from other races.
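
A toy sketch makes the effect visible. Everything below is synthetic and deliberately simplified: a basic nearest-neighbour classifier trained on a sample skewed 95 to 5 toward one group ends up markedly less accurate on the under-represented group:

    # A toy sketch of training-data bias; everything here is synthetic.
    # A simple k-nearest-neighbour classifier trained on a 95:5 skewed
    # sample ends up far less accurate on the under-represented group.
    import random

    random.seed(42)

    def make_point(group):
        # Synthetic 2-D "features"; the two groups overlap somewhat.
        cx, cy = (0.0, 0.0) if group == "A" else (2.0, 2.0)
        return (cx + random.gauss(0, 1.2), cy + random.gauss(0, 1.2), group)

    # Skewed training data: 95 examples of group A, only 5 of group B.
    train = [make_point("A") for _ in range(95)] + [make_point("B") for _ in range(5)]

    def predict(x, y, k=7):
        # Majority vote among the k nearest training points.
        nearest = sorted(train, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)[:k]
        labels = [p[2] for p in nearest]
        return max(set(labels), key=labels.count)

    for group in ("A", "B"):
        test = [make_point(group) for _ in range(200)]
        correct = sum(predict(px, py) == g for px, py, g in test)
        print(f"accuracy on group {group}: {correct / 200:.0%}")
    # Expected pattern: near-perfect on group A, far worse on group B;
    # the model leans toward whatever it mostly saw during training.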

As leaders we need to be at the forefront of taking holistic and proactive approaches involving multiple dimensions to ensure these ethical issues do not occur, or are identified as early as possible. And we must also ensure that any data fed into a GenAI tool is as holistic as possible.

What does this all mean for us?
Yes, certain jobs like content writing will change considerably, but humans will not be replaced any time soon. I do believe online article writing jobs may disappear, but that may not be a bad thing, and loads of new opportunities will come up. Companies may need machine learning contributors who feed AI tools with content. (It’s important to realize that more and more of what you see online will be AI-created.)

We need to learn to use these new tools effectively to reap the benefits. This starts with feeding the tool the correct information in the right format, and it ends with us double-checking the output before we share it with anyone.

At the same time, we need to take artefacts given to us with a grain of salt. Use your common sense and your gut feeling. Keep learning how to use the new tools that are coming up.

My lesson here: Don’t be scared of AI. Learn to use it instead.
