
3 Tactics for Outsmarting Artificial Intelligence

Artificial intelligence lacks sentience and at times is pragmatically useless.

Key points

  • Artificial intelligence (AI) is erroneously feared, much like earlier paradigm-shifting technologies that triggered similar scares.
  • AI has no capacity for feelings and thus cannot replicate human intuition or emotions.
  • Leaders and educators can design assessments that reduce the use of AI, resulting in fewer concerns about academic or professional honesty.

Are you excited or agitated by the non-stop artificial intelligence (AI) hype? If you are a student, you may see AI as a time saver or a chance to gain a competitive advantage. If you are an instructor, the advent of AI means that monitoring academic honesty has become far more challenging, or at least more laborious. If you are a content creator or publisher, you face dilemmas concerning originality, authorship, and plagiarism. Across these situations, AI users are vulnerable to ethical, moral, and legal questions about what is or is not a proper use of the emerging technology.

Artificial intelligence refers to the development of computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation (ChatGPT, 2023a). As with the introduction of other new and unfamiliar technologies, AI has instilled angst and anxiety in many individuals. People are concerned about everything from having their jobs replaced by AI to the eventual demise of humanity as a result of AI proliferation. Keep in mind that the same concerns were raised at the advent of the printing press (around 1436), the radio, and Facebook (now called Meta), yet none of those dire predictions came true (Bell, 2010). Like its predecessor technologies, AI is a tool designed to assist, not replace, human reasoning and productivity.

However, the perceived benefits of AI are limited for at least three reasons. According to ChatGPT (2023b), AI lacks common sense and contextual understanding, has limited creativity and no intuition, and has minimal ability to understand and interpret human emotions and behaviors. In other words, machine learning does a poor job of replicating human thinking and reasoning because a machine lacks sentience. The inability to recognize and express emotion is a distinct disadvantage relative to humans: robust empirical evidence shows that across careers and cultures, individuals high in emotional intelligence are far more effective than their strictly logical or unempathetic peers (O'Boyle et al., 2011).

Although AI can be used to increase productivity and enhance knowledge gains, diligence is necessary to counteract the liabilities described earlier. The leadership and evaluation tactics that follow can substantially reduce the pitfalls of AI.

In the workplace

From a leadership perspective, deemphasize deadlines and focus instead on innovation. When under pressure, individuals seek quick solutions and resort to methods that deliver immediate gratification, so expect reflexive AI queries to become commonplace going forward.

Instead of striving for short-sighted, immediate solutions, allow for problem-solving time that includes collaboration and group brainstorming to explore unorthodox methods of solving business problems. Empower team members to be innovative by tolerating ambiguity and rewarding the kinds of mistakes that ultimately contribute to organizational growth (Ibarra et al., 2018). Use AI only in circumstances where evidence-based knowledge is necessary or when optimizing standardized routines for repetitive business processes.

Minimizing student use of AI

Instructors can modify assessment types to minimize the use of AI. From an instructional perspective, in addition to using AI detection software, focus on assessments that evaluate creativity or the application of knowledge in specific contexts instead of testing for accuracy alone. Avoid rewarding simple recognition and recall by eliminating multiple-choice questions. Decrease the use of essays whose substance is merely regurgitating and repackaging information from a single source. Instead, require examples from students' own lives that apply knowledge to solve real-world or hypothetical problems.

Emphasize problems that are authentic and relevant to students and allow for numerous creative alternative solutions instead of one correct answer. Finally, ask students about their feelings and emotions as a result of acquiring specific knowledge because AI does not have the capability to feel or experience emotions or physical sensations. When a student can explain how they personally will use the knowledge, AI is of minimal value.

Focus on the future

Artificial intelligence has a tough time predicting the future, except in a very generalized sense. For example, projecting the impact of introducing a new company product or service is beyond the ability of most current AI systems. From a strategic planning or business challenge perspective, develop hypothetical situations that bear little relation to existing published work.

I asked ChatGPT, "If you knew you were going to be stranded on a desert island, what three items would you pack for the trip?" Before suggesting a knife, water filtration, and matches, the system indicated, "As an artificial intelligence language model, I don't have physical needs or desires." Not much help from AI beyond the knowledge of a 6-year-old Cub or Girl Scout!

On a more pragmatic basis, pose a hypothetical question to decision-makers, such as, "What would we do if one of our product lines was declared illegal?" These types of scenarios render AI pragmatically useless, and when I posed similar queries, ChatGPT repeatedly responded, "I'm sorry, but without more information, I cannot determine the question answer!"

By no means should the use of AI be avoided entirely, as AI is an easy, cost-effective way to resolve recurrent issues that have standardized solutions. Remember, the fact-based nature of AI does not exempt individuals or organizations from ethical and moral responsibilities (Golbin et al., 2020), including adherence to all copyright and intellectual property laws. Responsible AI includes the realization that professional or academic integrity is a complex issue with many contributing factors and consequences. Addressing the underlying causes of unethical AI use can help to reduce the incidence of misuse and create a more supportive and equitable learning and performance environment, which includes the use of emerging technologies.

References

Bell, V. (2010). Don’t touch that dial! A history of media technology scares, from the printing press to Facebook. Slate. https://slate.com/technology/2010/02/a-history-of-media-technology-scar…

ChatGPT. (2023a, February 13). Define artificial intelligence in one sentence [Response to question]. ChatGPT. https://chat.openai.com/chat/868d699c-7871-4b90-aee8-041aba7ba35d

ChatGPT. (2023b, February 14). What are the limits of artificial intelligence? [Response to question]. ChatGPT. https://chat.openai.com/chat/d5dce8a0-feb7-44d8-bdac-1095477c0d8c

Coil, C. (2014). Creativity in an assessment driven environment. Knowledge Quest, 42(5), 48-53.

Golbin, I., Rao, A. S., Hadjarian, A., & Krittman, D. (2020, December). Responsible AI: A primer for the legal community. In 2020 IEEE International Conference on Big Data (Big Data) (pp. 2121-2126). IEEE.

Ibarra, H., Rattan, A., & Johnston, A. (2018, June). Satya Nadella at Microsoft: Instilling a growth mindset. London Business School.

O'Boyle Jr, E. H., Humphrey, R. H., Pollack, J. M., Hawver, T. H., & Story, P. A. (2011). The relation between emotional intelligence and job performance: A meta‐analysis. Journal of Organizational Behavior, 32(5), 788-818.
