

Visit Paul's websites:
From Computing to Computational Thinking (computize.org)
Becoming a Computational Thinker: Success in the Digital Age (computize.org/CTer)

We all know that being objective is good. In fact, most people are confident in their own objectivity. However, such confidence can be severely challenged in the information age, with an overload of data coming our way from all directions. How do we tell the lies, half-truths, and propaganda from facts? As responsible netizens, how do we stop rumors and sensationalism and get the facts out?
Actually, at a more basic level, what is objectivity anyway? How is it related to subjectivity? Why is objectivity a virtue? And if it is, how do we make ourselves more objective? Here we will discuss these and other questions.
This article is part of our ongoing CT blog published in aroundKent (aroundkent.net), an online magazine. Other enjoyable and engaging CT articles can also be found in the author’s book Becoming A Computational Thinker: Success in the Digital Age. For more information, please see the website computize.org/CTer.
Objectivity means seeing things as they are, not as we wish them to be. It is the practice of putting aside personal bias, emotion and prejudice to discover and understand facts. It is an intentional “practice” that requires attention and effort.
It does not mean stripping away humanity; it means giving reality a chance to speak for itself.
A judge is expected to be objective, weighing evidence rather than siding with friends. The ancient Chinese historian Sima Qian (司馬遷) insisted on truth in his records, even when it angered emperors. Today his works remain invaluable because of that commitment to objectivity.
Objectivity is not just the absence of subjectivity. It is the presence of disciplined methods for reducing bias and aligning views with reality.

Objectivity is essential because decisions ripple outward—affecting individuals, communities, and entire societies.
Human brains are wired for survival, social belonging, and quick decision-making, not particularly for cold logical reasoning or neutrality. Let's look at some factors that can lead us astray.
Confirmation bias draws us toward evidence that supports what we already believe. The availability heuristic makes vivid or recent examples loom larger than they deserve. In-group favoritism tilts our judgment toward people like us. And anchoring lets the first number or claim we hear frame everything that follows.
These are just some of the many biases. We can also find many cases in daily life where people jump to conclusions or accept broad claims as if they were objective truths. On closer inspection, many of these beliefs turn out to be incomplete, misleading, or simply wrong.
In the information age, our challenge is greater than ever. Social media feeds are filled with rumors, half-truths, and outright lies—amplified by algorithms that reward clicks and outrage more than accuracy.

And here is where generative AI can help. Just as we once used dictionaries or encyclopedias to double-check claims, we can now ask AI systems to check a claim against established facts, summarize what reputable sources say, and flag statements that lack evidence.
AI doesn’t decide truth for us—but it gives us a second lens, helping us cross-check quickly and avoid falling for what merely sounds true. Combined with Computational Thinking (CT), it can keep us anchored in reality while navigating the storm of online information.
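As a concrete illustration, here is a minimal sketch of asking a generative AI system to cross-check a claim. It assumes the OpenAI Python client with a configured API key; the model name is only a placeholder, and any comparable AI service could play the same role.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    def cross_check(claim: str) -> str:
        """Ask the model what is verifiable, what is uncertain, and what to consult."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; substitute any available model
            messages=[
                {"role": "system",
                 "content": "You are a careful fact-checker. For the given claim, "
                            "state what is verifiable, what is uncertain, and "
                            "which kinds of sources a reader should consult."},
                {"role": "user", "content": claim},
            ],
        )
        return response.choices[0].message.content

    print(cross_check("Humans only use 10% of their brains."))

The reply is a second lens, not a verdict; we still weigh the evidence ourselves.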
Computational Thinking lets us apply key methods of digital computing to all kinds of problems, including those in our daily lives. CT is a mindset that helps us approach problems in a structured, disciplined way, and computational thinkers do not tolerate a lack of objectivity. Just as exercise builds physical strength, CT builds the mental muscles of objectivity. Among the mental tools CT provides are decomposition (break a big problem into manageable parts), abstraction (set aside distracting details and focus on what matters), and algorithmic rules (apply the same criteria to every case). Use these to detect and reduce your natural biases.
Powerful generative AI systems are at the forefront of technology and offer an instructive mirror. They are not “perfectly objective,” but their methods can demonstrate what true objectivity looks like in practice. AI doesn’t replace human wisdom—it shows us our blind spots.
1. AI uses objective methods. It applies the same procedures to every input, without moods or favorites.
2. AI and humans use the same foundations. Both rely on past data and accumulated facts.
The difference? AI sticks to procedure, while we may let emotions, social pressures, or personal perspective override our experience.

3. AI can help us practice objectivity. It offers a fast second lens for cross-checking claims before we pass them along.
4. AI has limits. It is not inherently objective—it inherits the biases and errors in whatever data we give it.
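To make point 4 concrete, here is a toy sketch (not any real AI system; the records and the majority-vote "model" are invented) showing how a purely data-driven predictor inherits whatever bias its training records contain.

    from collections import Counter

    # Invented past hiring records, skewed by historical favoritism.
    group_a_records = ["hire"] * 90 + ["reject"] * 10
    group_b_records = ["hire"] * 30 + ["reject"] * 70

    def majority_predictor(records):
        """Predict whatever outcome was most common in the past records."""
        return Counter(records).most_common(1)[0][0]

    print(majority_predictor(group_a_records))  # 'hire'   -- the skew is learned
    print(majority_predictor(group_b_records))  # 'reject' -- the bias is inherited

The procedure itself is perfectly consistent; the unfairness comes entirely from the data it was given.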
Why claim that objectivity is desirable at all? Isn't subjectivity part of being human? To answer this fairly, we need to consider the consequences of making decisions with and without objectivity.
Without objectivity, decisions shift with moods, favoritism, or pressure. That creates uncertainty and erodes trust. With objectivity, rules and criteria apply consistently, so others can predict outcomes. Societies, courts, and businesses function better when people trust that fairness outweighs personal bias.
Subjective judgment can distort reality (e.g., “the Earth is flat,” “disease is caused by bad spirits”). Objective reasoning—guided by evidence—has led to medicine, engineering, and technology that keep us alive and thriving. Objectivity has survival value.
If leaders, judges, or employers act on favoritism, injustice follows. History is filled with corruption and nepotism where objectivity was absent. Objective decision-making (e.g., merit-based exams, blind auditions in music) reduces unfair advantage and expands opportunity.
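To sketch the idea behind blind auditions in code (the names, criteria, and weights here are all invented for illustration), we can hide identities before scoring, so that only the work is judged:

    def blind_rank(submissions):
        """Rank submissions on stated criteria only, with names hidden."""
        anonymized = [{"id": i, "quality": s["quality"], "rigor": s["rigor"]}
                      for i, s in enumerate(submissions)]  # the name never comes along
        # One fixed, transparent rule, applied to every submission alike.
        return sorted(anonymized,
                      key=lambda s: 0.6 * s["quality"] + 0.4 * s["rigor"],
                      reverse=True)

    applicants = [
        {"name": "Alice", "quality": 7, "rigor": 9},
        {"name": "Bob",   "quality": 9, "rigor": 5},
    ]
    for entry in blind_rank(applicants):
        print(entry)  # ranked by the rule alone; names never enter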
Human groups often believe falsehoods for centuries. Objective methods (like the scientific method, peer review, statistical analysis) provide a way to self-correct. Without objectivity, errors compound. With it, knowledge steadily improves.
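Here is a minimal sketch of statistical self-correction: testing a belief against data instead of trusting a hunch. The coin-flip numbers are invented; the exact tail probability uses only the Python standard library.

    from math import comb

    def two_sided_binomial_p(heads, flips, p=0.5):
        """Chance of a result at least this far from expectation, if the belief holds."""
        expected = flips * p
        deviation = abs(heads - expected)
        return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
                   for k in range(flips + 1)
                   if abs(k - expected) >= deviation)

    # Belief: "this coin is fair." Observation: 72 heads in 100 flips.
    print(f"p = {two_sided_binomial_p(72, 100):.6f}")  # tiny (about 0.00001)

A fair coin would almost never stray that far from 50/50, so the objective move is to revise the belief, not to dismiss the data.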
Thus, from an outcome-based perspective, objectivity proves itself as a virtue: it produces more accurate knowledge, more fairness in society, and more trust in institutions.
History is filled with moments where bias, stubbornness, or subjective comfort led to disaster. In case after case, the objective evidence existed; the failure was in ignoring it.
From personal conduct to social systems, objectivity must coexist with humanity. In person-to-person interactions, blunt truth without empathy can damage trust. A wise communicator practices compassionate objectivity—stating facts with timing, tone, and respect. Recognizing human imperfection is itself part of being objective. People are emotional and biased; pretending otherwise is unrealistic. Objectivity therefore requires patience toward others and humility toward oneself.
Societies also design systems to support objectivity: schools that encourage and train children in independent thinking, peer review, ethical codes, and honors and awards that reward fair, evidence-based behavior. These structures acknowledge that doing the right thing cannot rely on moral strength alone.
Yet incentives can corrupt if they become goals instead of means. When people chase recognition rather than truth, objectivity erodes. Thus, the ideal is balance: empathy in interactions, integrity in systems. Objectivity, when humanized, becomes a social art—individuals respect truth; institutions reward it; both evolve toward fairness.
Here are some rules and reminders that can help us form the habit of being objective.
DOs
DON’Ts
Objectivity is not easy or automatic. But it is achievable with tools and discipline. To conduct an objective discussion, consider the golden rule: “leave people out and facts in.” This way, we can maintain our focus and not get distracted.
The information age brings challenges and solutions. Computational Thinking gives us the mindset: break down problems, focus on what matters, apply fair rules. AI systems give us the toolset: consistency, scale, and a mirror to our blind spots. Together, CT and AI make objectivity not just an abstract ideal, but a practical habit of mind.
If we pay attention to being objective and keep learning and improving our way of thinking, we can make better decisions in science, business, governance, and everyday life. Not to mention that we can see through scams more easily. The point is simple: reality rewards those who face it objectively.
ABOUT PAUL
A Ph.D. and faculty member from MIT, Paul Wang (王 士 弘) became a Computer Science professor (Kent State University) in 1981, and served as a Director at the Institute for Computational Mathematics at Kent from 1986 to 2011. He retired in 2012 and is now professor emeritus at Kent State University.
Paul is a leading expert in Symbolic and Algebraic Computation (SAC). He has conducted over forty research projects funded by government and industry, authored many well-regarded Computer Science textbooks, most of which have been translated into foreign languages, and released many software tools. He received the Ohio Governor's Award for University Faculty Entrepreneurship (2001). Paul has supervised 14 Ph.D. students and over 26 Master's-degree students.
His Ph.D. dissertation, advised by Joel Moses, was on Evaluation of Definite Integrals by Symbolic Manipulation. Paul's main research interests include Symbolic and Algebraic Computation (SAC), polynomial factoring and GCD algorithms, automatic code generation, Internet Accessible Mathematical Computation (IAMC), enabling technologies for and classroom delivery of Web-based Mathematics Education (WME), as well as parallel and distributed SAC. Paul has made significant contributions to many parts of the MAXIMA computer algebra system.
Paul continues to work jointly with others nationally and internationally in computer science teaching and research, write textbooks, consult on IT (sofpower.com), and manage his Web development business, webtong.com.
