
Critical Thinking in the Age of AI: How to Utilize AI the ‘Right’ Way in Quality

Let’s debunk five common myths that hinder effective AI use. By Attrayee Chakraborty

The application of AI in quality receives mixed responses: to some, it solves many problems; to others, it feels like Pandora’s box. In the almost four years since OpenAI released its models, we have seen the world evolve from a generation of prompt engineering to agentic AI. We have seen the application of AI bleed not just into the creation of images but into enterprise AI systems. We have seen organizations move from limiting AI usage to establishing AI governance models. Most importantly, we have seen regulatory bodies move toward using AI in the evaluation of their submissions. With increasingly widespread adoption of AI in almost all forums, the question is no longer how we use AI, but how we use critical thinking to apply AI intelligently and strategically. 

Critical Thinking 

The concept of critical thinking is a tale as old as time; its first mentions arise with philosophers such as Socrates. In the modern world, the core of critical thinking can be defined simply as “…careful goal-directed thinking.” In healthcare, critical thinking (critical appraisal) entered medical evaluation with the rise of evidence-based practice. From there, the concept spread to regulators and industry associations, with the FDA encouraging it in its guidance: “Creating a culture that values and rewards critical thinking and open, proactive dialogue about what is critical-to-quality …going beyond sole reliance on tools and checklists, is encouraged.” The ISPE GAMP® Good Practice Guide likewise discusses how critical thinking, grounded in product and process knowledge and quality risk management, can remove barriers to the introduction of new and innovative technologies. With AI as the latest of these technologies, the challenge is clear: how can we ensure critical thinking guides our engagement with it, turning potential pitfalls into practical advantages? 

Let’s explore this by debunking five common myths that hinder effective AI use in quality systems.

Myth #1: AI is great in a demo but does not work in a production-level environment.  

Fact #1: As most of us know, AI works only as well as its user. That’s where critical thinking becomes key: understanding why and how to prompt an LLM matters. In the world of quality, asking it to generate a problem statement, for instance, without any context will not produce a desired or usable result. Providing contextual information, not just about the problem we seek to solve but also about the acceptable quality standard of a model response, allows the LLM to generate a response attuned to the organization’s processes.  

How does critical thinking work here? As quality professionals, we can identify the requirements a model needs in order to answer questions in context. Our direction and guidance shape the performance of the model, and clarity of thought in the “reasoned approach” is reflected in the workflows we create with AI. The more specifically and deliberately the logic flow and source data are built, the better the model’s output. Even without enterprise integration, thinking critically about the depth of a prompt can yield drastic improvements in output through prompt engineering alone.  
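The contrast between a bare prompt and a contextualized one can be sketched in a few lines. This is an illustrative sketch only: the context fields, specification ID, and quality-standard wording below are hypothetical placeholders, not taken from any real QMS.

```python
# Illustrative sketch: adding organizational context to a prompt.
# All context values and the quality standard wording are hypothetical.

def build_problem_statement_prompt(issue, context=None):
    """Assemble a prompt asking an LLM to draft a quality problem statement."""
    prompt = f"Draft a problem statement for the following issue:\n{issue}\n"
    if context:
        prompt += "\nUse this organizational context:\n"
        for key, value in context.items():
            prompt += f"- {key}: {value}\n"
        prompt += (
            "\nThe response must follow our quality standard: state the gap "
            "between the current and expected condition, cite the affected "
            "process, and avoid speculating about root cause."
        )
    return prompt

# A bare prompt vs. a contextualized one for the same issue
bare = build_problem_statement_prompt("Labels on lot 42 are smudged.")
rich = build_problem_statement_prompt(
    "Labels on lot 42 are smudged.",
    context={
        "process": "secondary packaging, labeling station 3",  # hypothetical
        "specification": "SPEC-LBL-007 print legibility",      # hypothetical
        "acceptable output": "three sentences, no root-cause claims",
    },
)
print(len(rich) > len(bare))  # True
```

The point is not the code itself but the habit it encodes: the prompt carries the affected process, the governing specification, and the acceptance standard for the answer, so the model is constrained toward a usable result.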

Myth #2: We cannot work with AI at an enterprise level because our data is not clean. The investment is too high, the results too poor.  

Fact #2: As quality professionals, we thrive in the spirit of continuous improvement. When no golden reference exists, we can use critical thinking to create one. Putting on our critical thinking hat, we can identify good examples of quality-related documentation that meet our acceptance criteria. Those criteria may relate to the completeness and thoroughness of rationales, impact assessments, and the use of holistic, cross-functional thinking when evaluating changes to a quality management system. A risk-based scoring system for identifying “good” data for training models and evaluating model outputs can surface a warehouse of reference data that employees never knew existed.   
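A risk-based scoring rubric of the kind described above can be as simple as weighted criteria and a threshold. The criteria names, weights, and cutoff below are hypothetical; an organization would calibrate its own.

```python
# Illustrative sketch of a risk-based scoring rubric for flagging "good"
# reference documents. Criteria, weights, and threshold are hypothetical.

WEIGHTS = {
    "complete_rationale": 3,       # rationale sections filled in and thorough
    "impact_assessment": 3,        # impact assessment present and reasoned
    "cross_functional_review": 2,  # evidence of cross-functional input
    "traceable_references": 2,     # links to affected procedures and records
}
THRESHOLD = 7  # hypothetical cutoff for "golden reference" candidates

def score_document(attributes):
    """Sum the weights of the criteria a document satisfies."""
    return sum(WEIGHTS[name] for name, met in attributes.items() if met)

def is_golden_candidate(attributes):
    return score_document(attributes) >= THRESHOLD

# Example: a change record meeting three of the four criteria
change_record = {
    "complete_rationale": True,
    "impact_assessment": True,
    "cross_functional_review": False,
    "traceable_references": True,
}
print(score_document(change_record))    # 3 + 3 + 2 = 8
print(is_golden_candidate(change_record))  # True
```

Documents that clear the threshold form the curated training and evaluation set; those that fall short become continuous-improvement targets in their own right.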

Myth #3: AI hallucinates more for our systems. Better not to use it!  

Fact #3: The output is only as good as the input. AI can act as a mirror for quality management systems. Applying the critical thinking principle of asking why AI would have a higher chance of hallucinating, a quality professional can uncover multiple systemic gaps; misaligned procedures, for example, may cause a model to hallucinate. These instances of anomalous AI responses become openings to dig deeper into misalignments in the QMS and identify areas of improvement.  

Myth #4: We don't have enterprise AI licenses, and most standards are proprietary—we can't upload them to models. What's the point? 

Fact #4: While this is a real challenge that depends on organizational architecture, a quality professional can always use publicly available regulations with open-source LLMs. Appropriate prompt engineering coupled with this approach can help parse regulations, simplify them, and map them onto the core areas of a QMS. One can also generate relevant audit questions for specific sections of the regulations. This analysis can then be applied in an organizational context to perform a self-audit of internal processes and establish compliance with the publicly available regulations. Again, critical thinking about prompt engineering, so that outputs arrive in a format contextually aligned with the QMS, can yield great results. 
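One way to sketch this regulation-to-QMS workflow is a prompt template that asks the model for a structured, audit-ready answer. The prompt wording, the QMS area list, and the output schema below are assumptions for illustration; the excerpt echoes the publicly available CAPA requirement in 21 CFR 820.100.

```python
# Illustrative sketch: prompting an open-source LLM to map a public
# regulation excerpt onto QMS areas and draft audit questions.
# Prompt wording, area list, and output schema are assumptions.

QMS_AREAS = ["document control", "CAPA", "management review", "training"]

def build_regulation_audit_prompt(regulation_excerpt, qms_areas=QMS_AREAS):
    """Build a prompt requesting a summary, a QMS mapping, and audit questions."""
    areas = ", ".join(qms_areas)
    return (
        "You are assisting a quality self-audit.\n"
        f"Regulation excerpt:\n{regulation_excerpt}\n\n"
        "1. Summarize the requirement in plain language.\n"
        f"2. Map it to one of these QMS areas: {areas}.\n"
        "3. Write three audit questions an internal auditor could ask.\n"
        "Answer as JSON with keys: summary, qms_area, audit_questions."
    )

prompt = build_regulation_audit_prompt(
    "Each manufacturer shall establish and maintain procedures for "
    "implementing corrective and preventive action."  # cf. 21 CFR 820.100
)
print("CAPA" in prompt)  # True
```

Requesting a fixed JSON schema makes the model’s answer easy to drop into a self-audit checklist, which is exactly the “format contextually aligned with the QMS” the fact above calls for.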

Myth #5: I just need the right AI vendor to solve my problems. Whichever one has the most features will be the best match for me.  

Fact #5: Wearing the hat of critical thinking, question the “why” and the “how.” Shoehorning a solution into a problem is rarely the best approach and can add the costs of vendor assessment, onboarding, evaluation, and validation. It is important to think critically about the problem statement and about the gap between the current state and the ideal future state. Identifying those gaps through value stream maps can help diagnose the root causes behind process inefficiencies and point to solutions accordingly. The root cause may often relate to resources, organizational culture, or other factors; an AI solution may not always address it.  

Similarly, even if it is agreed that AI can help alleviate some of the symptoms or root causes, the AI vendor must be selected with the problem statement in mind. A solution with many features may not be the best path forward when it comes to integrating with the organization’s existing architecture. Choosing the “why” over the “what” can make all the difference.  

Conclusion 

In the age of AI, critical thinking remains the cornerstone that lets quality professionals harness its potential responsibly and effectively. By debunking myths through reasoned prompting, data curation, systemic reflection, regulatory mapping, and precise problem-solving, quality professionals can transform AI from a Pandora’s box into a powerful ally for innovation and compliance. Embracing this mindset supports organizational excellence and ensures not just better AI outputs but continuous improvement in quality management systems, paving the way for a future where technology amplifies human judgment rather than replacing it. 

Disclaimer  

The views and opinions expressed in this article are solely those of the author and do not represent the views, positions, or policies of their respective employers, or any of their affiliates, directors, officers, or employees. The content is provided for informational purposes only and should not be construed as official guidance, policy, or endorsement by any organization with which the author is affiliated. 

Opening Background Image Source: metamorworks / iStock / Getty Images Plus via Getty Images.

Attrayee (Atty) Chakraborty is an award-winning Quality Systems Engineer at a Digital Healthcare division where she drives QMS processes and continuous improvement for enterprise excellence. Attrayee has previously spoken on AI regulation at 30+ national and international conferences. She is also the co-editor of the American Society for Quality (ASQ) Medical Device Division newsletter, a Center lead of Pathway for Patient Health, and a working member of IEEE P3396 standards (Recommended Practice for Defining and Evaluating AI Risk, Safety, Trustworthiness, and Responsibility). Attrayee is recognized as an AI community leader by the Regulatory Affairs Professional Society (RAPS), a Board Director at Boston Congress of Public Health and is a recipient of the RAPS Rising Star Award (2025) and Quality Rookie Award (2025). You can reach her via LinkedIn.