(Harbinger’s Daily)—AI assistants such as Grok and ChatGPT exert an unsettling degree of influence on our society.
With their creators boasting about AI’s vastly superior intelligence, people often trust the programs’ answers unquestioningly. Ask Grok about the origins of life, and you will get an answer detailing the “scientific consensus” of billions of years of evolution. Most users accept the validity of that response without factoring in the bias of those who programmed the technology.
But what happens when AI is challenged on the facts?
Calvin Smith, the executive director of Answers in Genesis Canada, discovered something astonishing in his new video series “A Talk With Grok”: when you peer behind the bias and press the programming to examine the evidence logically, you get a very different answer.
Smith began by setting parameters to strip away ideologically driven answers, asking Grok to apply only strict logic, mathematical probability, and observational science in its responses.
- Read More: harbingersdaily.com

