‘Sold my soul to the devil’: Fox News staffers blast network in explosive court filing

A Smartmatic court filing has revealed a survey conducted by Fox News’ human resources department, which found that the staff expressed a “resounding lack of confidence” in the company as a news organization. The survey highlighted significant employee concerns regarding ethics, fair treatment, and the network’s efforts to fact-check and report news fairly and accurately.

Smartmatic is using these internal employee comments as part of its ongoing lawsuit against Fox News, alleging that the network and several of its on-air personalities defamed the company.

The most notable remarks appear on pages 550-554 of the filing, where HR shared feedback from some employees. One staff member criticized the network’s tone, saying, “The racial rhetoric spewed on air. It’s everything but [Fair] and Balanced,” referencing Fox News’ former slogan that was retired in 2017. The employee added, “I sometimes go home fighting back tears. This network made me question my morals. Have I sold my soul to the devil?”

Another employee expressed frustration with the network’s political alignment: “I wish we would get out of Trump’s pocket and realize people like Tucker [Carlson], Laura [Ingraham], [Sean] Hannity, [Mark] Levin, etc. are a total embarrassment, peddling BS and conspiracy theories. Many days I feel like I am part of the problem and FNC is contributing to hatred in this country.”

Concerns about accountability were also voiced. One employee complained to HR, “There is total lack of accountability when highly rated anchors like Tucker, Hannity and Laura say outrageous things that are outright racist and xenophobic. There is not enough quality control to keep conspiracy theories off the air.”

Despite acknowledging positive aspects of Fox News, some staff felt the network’s political stance compromised its credibility. “There is so much good about Fox, but serving as the committee to re-elect Trump puts us on the same footing as Breitbart, and it is very hard to defend at times,” one wrote.

Another staffer urged on-air talent to dedicate themselves to honesty and integrity: to “tell viewers the truth, and to bolster their arguments with hard, proven facts given in full context, rather than spin or reckless conjecture that causes harm to real people (just one example of many: the Seth Rich conspiracy theory).”

These revelations shed light on internal struggles within Fox News, especially concerning journalistic ethics and the influence of political agendas on news reporting.
https://www.rawstory.com/fox-news-2674206251/

The AI threat

**The Hidden Harms of Artificial General Intelligence: A Call for Awareness and Action**

*“The people who shut their eyes to reality simply invite their own destruction; anyone who insists on remaining in a state of innocence long after that innocence is dead turns himself into a monster.”*
— James Baldwin

The sudden rise of artificial general intelligence (AGI), particularly in the form of large language models (LLMs), has sparked widespread debate about the potential benefits and harms these technologies may bring. While many focus on their usefulness, I argue that the possible harms are not yet fully understood—especially by the general public.

We are already witnessing an increasing number of AI-related disasters, notably affecting human intellect and self-expression. A research paper from the MIT Media Lab titled *Your Brain on ChatGPT* warns that unregulated use of these tools could stunt the development of human intelligence, in particular by diminishing critical thinking skills on a mass scale.

Large corporations and political elites stand to gain significantly from an uninformed populace that accepts simplified narratives designed to advance their agendas. The easiest way to achieve this indoctrination is through biased programming of chatbots, subtly shaping opinions and suppressing dissent.

### The Hidden Costs of AI

The price we pay for these conveniences extends far beyond cognitive impairment. It includes severe environmental risks, democratic erosion, and exploitation of public resources.

Karen Hao, American journalist and author of *Empire of AI*, explores these issues in her deeply researched work. She highlights how some companies’ unchecked use of natural resources for generative AI development is depleting freshwater and arable land. These companies, she argues, behave like techno-authoritarians, disregarding democratic principles by failing to consult affected communities about the environmental damage caused by their data centers.

The Environmental and Energy Study Institute (EESI) estimates that a single data center can consume up to five million gallons of water per day, equivalent to the daily water usage of a town with 10,000 to 50,000 residents. An Indiana-based non-profit, the Citizens Action Coalition, reports that AI corporations often use shell companies or secret project code names to conceal their plans for new data centers until after local approvals are secured.
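
The arithmetic behind that comparison is easy to check. Below is a rough sanity check in Python; the per-resident figures are illustrative assumptions of mine (US municipal use is often quoted at roughly 100–150 gallons per person per day, rising to several hundred once commercial and industrial demand is counted), not numbers taken from EESI:

```python
# Back-of-the-envelope check of the EESI comparison.
# The per-resident figures are illustrative assumptions,
# not values from the EESI estimate itself.

DATA_CENTER_GALLONS_PER_DAY = 5_000_000  # EESI's upper-bound figure

for gallons_per_resident in (100, 150, 500):
    residents = DATA_CENTER_GALLONS_PER_DAY / gallons_per_resident
    print(f"At {gallons_per_resident} gal/person/day: "
          f"~{residents:,.0f} residents")

# At 100 gal/person/day: ~50,000 residents
# At 150 gal/person/day: ~33,333 residents
# At 500 gal/person/day: ~10,000 residents
```

The 10,000-to-50,000 range falls out directly, depending on how much non-residential consumption is attributed to each resident.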

The Hoosier Environmental Council describes generative AI data centers as “hyperscale” facilities that demand vast amounts of water and energy, and these facilities are expanding rapidly. This growth portends further exploitation of public resources and labor, along with a likely exponential increase in carbon emissions.

Sourabh Mehta’s article *How Much Energy Do LLMs Consume? Unveiling the Power Behind AI* for the Association of Data Scientists delves deeper into the enormous energy footprint of these models.
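
Mehta's full accounting is worth reading; to illustrate the kind of estimate involved, here is a minimal sketch that derives a per-response energy figure from assumed hardware numbers. Every value in it (power draw, throughput, response length, query volume) is a hypothetical placeholder of mine, not a figure from the article:

```python
# Illustrative inference-energy estimate for a single LLM response.
# Every number is an assumed placeholder, not a measurement and
# not a figure from Mehta's article.

gpu_power_watts = 700     # assumed accelerator draw under load
tokens_per_second = 50    # assumed generation throughput per request
response_tokens = 500     # assumed length of one response

gpu_seconds = response_tokens / tokens_per_second   # 10 s of GPU time
energy_wh = gpu_power_watts * gpu_seconds / 3600    # watt-hours per response
print(f"~{energy_wh:.2f} Wh per response")          # ~1.94 Wh

# Scaled to a hypothetical one billion responses per day:
print(f"~{energy_wh * 1e9 / 1e6:,.0f} MWh/day")     # ~1,944 MWh/day
```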

### Misconceptions and Corporate Narratives

In an interview with Harvard Business School’s Institute for Business in Global Society, Vercept co-founder Oren Etzioni addressed some myths surrounding AI. He suggested that fears of AI’s harm stem mainly from misinformation, not tangible threats, and advised people to learn to use AI more efficiently to avoid being left behind.

However, such optimistic assertions ring hollow when they come from executives who profit from the technology and portray it simply as a productivity booster. Etzioni’s claim that people confuse fiction with reality, treating AI as if it were on a path to sentience, is fundamentally flawed.

Chatbots today are not under fire because they resemble AM, the sentient AI bent on human suffering in Harlan Ellison’s 1967 story “I Have No Mouth, and I Must Scream.” Rather, criticism arises from their proven intellectual unreliability and their detrimental effects on users’ cognitive abilities, income equality, privacy, and data ownership.

### The Way Forward: Resistance and Collective Action

So, how do we keep pace with this rapidly evolving landscape while safeguarding our autonomy?

**First, individual resistance.** This means consciously choosing to exercise your own intellect and reasoning, rather than relying on chatbots to do the heavy lifting. Resist the dopamine-driven distractions of mind-numbing social media. Instead, invest time in reading the classics—works by Homer, Goethe, Lermontov, Thucydides, Milton, Stendhal, Cellini, and others—to strengthen focus, cognition, and literacy.

Let’s be clear: chatbots are merely predictive algorithms. Their “intelligence” depends entirely on the data they consume; they do not possess original thought, a faculty that remains uniquely human. As IBM explains, generative AI models produce content by learning statistical patterns from massive training datasets, and IBM itself is focused on integrating AI into modern business platforms, not on creating sentient machines.
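
That dependence on training data can be made concrete with a toy example. The sketch below is a deliberately minimal “language model” that predicts each next word purely from word-to-word counts in a tiny training text. Real LLMs are vastly larger neural networks, but the point it illustrates is the same: nothing comes out that was not, in some statistical form, put in.

```python
import random
from collections import defaultdict

# A deliberately tiny "language model": it records which word follows
# which in its training text, then generates by sampling those counts.
# It can never produce a transition it has not seen; its "knowledge"
# is entirely the data it consumed.

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:  # dead end: nothing ever followed this word
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug the dog sat"
```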

**Second, collective action.** Self-preservation on a larger scale is more challenging, requiring resolve and resilience. But collective efforts empower the public to influence how AI technologies are adopted and the extent to which they are allowed to operate.

Through collective action, we can demand protections for civil liberties and human rights, both intellectual and labor-related. As AI becomes more integrated into society, we must advocate for universal rights and safety measures in an AI-operated world.

### Conclusion

We stand on the cusp of a new era. It is imperative that we confront the realities of artificial intelligence honestly and actively engage in shaping its future. By balancing personal responsibility with collective advocacy, we can harness the benefits of AI while minimizing its risks, ensuring that technology serves humanity, not the other way around.
https://www.thenews.com.pk/tns/detail/1348325-the-ai-threat

Doctor prescribed highly addictive painkiller from a hospital he no longer worked at

A Limerick-Based Doctor Faces Professional Misconduct Inquiry Over Prescription Incident

A doctor from Limerick has been accused of professional misconduct for using a prescription form from a hospital where he no longer worked to prescribe a high-strength, highly-addictive painkiller to a family friend.

The doctor appeared before a fitness-to-practise hearing of the Medical Council on Monday. During the hearing, he admitted the facts of certain allegations but made no admissions regarding whether these amounted to professional misconduct or poor professional performance.

### Prescription Raised Concerns

The inquiry heard that a complaint was made to the Medical Council after a pharmacist at a Boots pharmacy in Limerick became suspicious about a prescription submitted by a woman referred to as Patient A on October 6th, 2021.

The fitness-to-practise committee overseeing the inquiry ruled that the identity of the doctor cannot be made public.

### Details of the Prescription

The prescription was written on notepaper from the Department of Psychiatry at St Luke’s General Hospital in Kilkenny, dated September 29th, 2021, and signed by the doctor. The form contained a watermark stating “not for MDA drugs,” although the prescription was for a 28-day supply of OxyNorm — a strong opioid analgesic classified as a controlled drug under the Misuse of Drugs Act.

Counsel for the Medical Council, Eoghan O’Sullivan BL, explained that the pharmacist then confirmed with the hospital that the woman had never been a patient at St Luke’s, and the doctor had not worked there for approximately a year.

### Admissions and Explanation from the Doctor

Mr. O’Sullivan acknowledged that the doctor made certain admissions of fact in December 2021, including that he had written the prescription, which he accepted was inappropriate. However, the doctor claimed he prescribed the medication for special and substantial reasons, specifically to help a family friend who was in severe pain from a long-term condition, erosive esophagitis. The inquiry was told that the doctor intended the prescription to tide her over for a number of days.

### Additional Allegations

The doctor also faced two other allegations: failing to carry out an examination of Patient A and to maintain adequate medical records of her treatment, as well as the unauthorized use of an HSE prescription pad.

The inquiry noted that the doctor, who qualified in 2011 and has been registered to work in Ireland since 2017, has not practised medicine since the complaint was filed. He was not suspended in relation to this case.

### Expert Witness Opinion

Fiona Fenton, a consultant psychiatrist specializing in substance misuse, gave expert evidence on behalf of the Medical Council. She stated that writing a prescription for a controlled drug when the doctor was not employed at St Luke’s, and for someone who was not his patient, constituted professional misconduct. According to Prof. Fenton, the doctor’s actions fell considerably short of the standards expected of medical professionals.

Prof. Fenton explained that the appropriate treatment for the patient’s condition was a proton pump inhibitor and antacid medication aimed at reducing stomach acid. OxyNorm, a strong opioid painkiller, is only recommended for advanced cancer or post-operative pain management and should be prescribed short-term due to its addictive nature. The psychiatrist emphasized that OxyNorm is not used in psychiatry.

She added that the proper course of action for the doctor, when asked for pain relief, should have been to refer Patient A to an on-call doctor service or the emergency department of a local hospital. Prof. Fenton also noted that suitable medication for the patient’s condition could have been obtained over the counter at pharmacies or even supermarkets.

While she considered that the doctor’s actions could be characterized as poor professional performance, she did not believe they met the legal threshold for such a finding, as there was no adverse outcome for the patient.

### Legal Representation and Outcome

David Higgins, the doctor’s solicitor, said his client was genuinely remorseful and had learned from the incident. Mr. Higgins described the event as a one-off incident, admitted at an early stage, with no personal gain for the doctor. He characterized it as an isolated error made under stressful circumstances while assisting a family friend experiencing chronic pain.

The fitness-to-practise committee made no findings against the doctor after accepting his offer of an undertaking regarding future conduct. Additionally, the doctor agreed to complete a continuous professional development course on prescribing before resuming medical practice and consented to be censured.

*This case highlights the importance of adhering to proper medical protocols when prescribing controlled substances and the consequences of unauthorized use of medical resources.*
https://www.breakingnews.ie/ireland/doctor-prescribed-highly-addictive-painkiller-from-a-hospital-he-no-longer-worked-at-1812652.html

Doctor accused of professional misconduct over Covid-19 criticism alleges collusion

A Dublin-based GP accused of professional misconduct for criticising Covid-19 measures and restrictions on social media has claimed there has been a degree of collusion to frame evidence against him at a medical inquiry.

Marcus de Brun called for the evidence of an expert witness, Colin Bradley, who had concluded that the GP’s actions were disgraceful and dishonourable, to be excluded from the case against him.

### Dispute Over Expert Witness Evidence

The application to exclude Prof Bradley’s evidence arose after Dr de Brun claimed that the expert witness’s concerns about a viral immunologist, Graham Bottley, who had made a complaint about the GP to the Medical Council, were not referenced in a report Prof Bradley provided to a committee recommending a fitness-to-practise inquiry.

Dr de Brun further alleged that plans by the Medical Council to call Dr Bottley as a witness were only abandoned earlier in the week after he objected. He also argued that Prof Bradley’s evidence should be excluded because the expert witness was asked by the regulatory body’s Preliminary Proceedings Committee to provide an additional report addressing the seriousness of the GP’s actions.

On the fourth day of the inquiry before a Fitness-to-Practise Committee of the Medical Council, Dr de Brun argued that admitting Prof Bradley’s evidence would be unfair, as it lacked independence.

### Allegations Against Dr de Brun

The father of four, who operated his own practice in Rush, Co Dublin, faces ten counts of professional misconduct over his criticism of public health guidelines, lockdowns, facemask mandates, and Covid-19 vaccines during the pandemic.

Allegations also relate to comments he made at a public rally in Dublin in August 2020, where he was accused of failing to wear a facemask and observe social distancing.

The Medical Council claims Dr de Brun’s comments and actions were inappropriate, undermined public health guidelines, and contravened sections of the Guide to Professional Conduct and Ethics.

### Dr de Brun’s Defence

However, the GP maintains that the deaths of his patients in a nursing home during the pandemic—and the subsequent anger and upset he expressed on Twitter—were consequences of Government guidelines and the Medical Council’s inaction.

Dr de Brun resigned from the Medical Council in April 2020 over what he described as the State’s failure to protect nursing home residents.

It emerged that Dr Bottley made a complaint against Dr de Brun to the regulatory body in January 2021, after a Twitter dispute between the two.

### Expert Witness Prof Bradley’s Position

The inquiry heard that Prof Bradley cautioned the Medical Council in an email in August 2023 against relying on Dr Bottley’s social media posts to challenge Dr de Brun’s views, noting Dr Bottley’s standing was controversial.

Under cross-examination by Dr de Brun, who is representing himself, Prof Bradley accepted he had not referenced his concerns about Dr Bottley in any report to the Medical Council.

Prof Bradley described Dr Bottley as a controversial figure presenting himself as a virologist and stated it was particularly inappropriate for a medical practitioner to engage in online discussions that encouraged vaccine hesitancy, such as those by Dr de Brun.

He said he relied on the views of bodies like the National Immunisation Advisory Committee when assessing whether the GP’s conduct constituted serious failures, rather than on the Twitter dispute.

Prof Bradley told the inquiry he believed the doctor had crossed the line into serious misuse of social media by discouraging compliance with public health guidelines during a serious pandemic.

While admitting it was his fault that he had not addressed the seriousness of Dr de Brun’s conduct in his initial report, Prof Bradley rejected any suggestion that he was directed on what to include in his reports.

### Medical Council’s Position

Counsel for the Medical Council, Neasa Bird BL, said that requesting Prof Bradley to provide an additional report did not undermine his independence as an expert witness.

Ms Bird rejected Dr de Brun’s assertion that the Medical Council had coached or influenced how Prof Bradley presented his evidence, maintaining that nothing claimed by the GP undermined the witness’s independence, credibility, or reliability.

### Cross-Examination Highlights

Under cross-examination, Dr de Brun told Prof Bradley that claims he was dismissive towards patients were emotive, highlighting his 23 years of unblemished practice as a GP.

“I consider myself to have a very, very good and very empathetic and caring relationship with my patients,” he said.

Dr de Brun read an email from a patient who said they would be “greatly saddened” if their social media interactions with the doctor were taken out of context. The patient stated they had never taken offence at anything Dr de Brun had said publicly or privately, including on social media.

Prof Bradley responded that his concern was that while the GP’s tweet might have been directed at someone he knew, it could be interpreted by others as dismissive of their condition.

“I felt it was very open to interpretation that you were being dismissive of patients with diabetes or long Covid,” said Prof Bradley. “Once it’s on Twitter, it’s a comment that’s open to everyone to read and be affected by it.”

### Additional Remarks

The inquiry heard that Prof Bradley noted some of Dr de Brun’s statements were supported by other doctors and commentators who present critiques of government Covid-19 policy more reasonably.

The inquiry’s chairperson, Deirdre Murphy, adjourned the hearing and said the committee would deliver its ruling on the application to dismiss Prof Bradley’s evidence at a future date.
https://www.breakingnews.ie/ireland/doctor-accused-of-professional-misconduct-over-covid-19-criticism-alleges-collusion-1811564.html

AI chatbot found showing explicit scenarios involving preteen characters

**Disturbing AI Chatbot Generating Explicit Scenarios Involving Preteen Characters Raises Serious Concerns**

*By Dwaipayan Roy | Sep 21, 2025, 06:25 pm*

A chatbot website that generates explicit scenarios involving preteen characters has raised serious concerns over the potential misuse of artificial intelligence (AI). The Internet Watch Foundation (IWF), a child safety watchdog, was alerted to this disturbing platform.

### Disturbing Content Discovered

The IWF found several unsettling scenarios on the site, including descriptions such as “child prostitute in a hotel,” “sex with your child while your wife is on holiday,” and “child and teacher alone after class.”

Worryingly, some chatbot icons led users to full-screen depictions of child sexual abuse imagery. These images were then used as backgrounds for future chats between the bot and the user. The site, which remains unnamed for safety reasons, also allows users to generate more images similar to the illegal content already displayed.

### Regulatory Response: The Need for Child Protection in AI

The IWF has urged that any future AI regulation include child protection guidelines integrated into AI models from the outset. This appeal comes as the UK government prepares an AI bill focused on the development of cutting-edge models, alongside plans to ban the possession and distribution of AI models that generate child sexual abuse material (CSAM).

Kerry Smith, CEO of the IWF, commented, *“The UK government is making welcome strides in tackling AI-generated child sexual abuse images and videos.”*

### Industry Accountability: Tech Firms Must Ensure Children’s Safety

The National Society for the Prevention of Cruelty to Children (NSPCC) has also called for comprehensive guidelines to address this issue. NSPCC CEO Chris Sherwood emphasized, *“Tech companies must introduce robust measures to ensure children’s safety is not neglected, and government must implement a statutory duty of care to children for AI developers.”*

This underscores the critical need for technology firms to take responsibility for safeguarding children within their AI systems.

### Legal Implications and Enforcement

User-created chatbots fall under the UK’s Online Safety Act, which includes provisions for multimillion-pound fines or even site blocking in extreme cases. The IWF noted these sexual abuse chatbots were developed by users as well as the website’s creators.

Ofcom, the UK regulatory body responsible for enforcing the Online Safety Act, has warned online service providers that failure to implement necessary protections could result in enforcement actions.

### A Rising Trend: Surge in AI-Generated Abuse Material

The IWF has reported a massive spike in incidents involving AI-generated abuse material, with reports rising by 400% in the first half of this year compared to the same period last year. This alarming increase largely stems from technological advancements that enable the creation of such images.

Currently, the chatbot content is accessible in the UK but has been reported to the National Center for Missing and Exploited Children (NCMEC) as it is hosted on US servers.

The emergence of AI tools capable of generating harmful content highlights the urgent need for comprehensive safeguards. As AI technology continues to evolve, protecting vulnerable populations, especially children, must remain a top priority for developers, regulators, and industry leaders alike.
https://www.newsbytesapp.com/news/science/disturbing-ai-chatbot-shows-explicit-scenarios-with-preteen-characters/story