LANSING, Mich. (Michigan News Source) – In a shocking report from the Foundation for Freedom Online (FFO), we have learned that the National Science Foundation (NSF) is handing out millions of dollars in grant money to private companies and universities to better control what is said and seen on social media and to develop huge databases that use AI-based systems to track people.
The NSF has spent $38.8 million on government grants to fund the censorship of “misinformation” since Biden was elected, with 64 grants given to 42 different colleges and universities to develop censorship tools and operations.
Although this seems like a scene out of a science fiction movie, the facts are undeniable. Tools are being developed in colleges and universities all over the country to assist the federal government, through NSF grants, in disregarding the First Amendment, which explicitly says that the government cannot abridge the freedom of speech.
What is coming out of the NSF is called the “Convergence Accelerator Track F domestic censorship project,” and it includes a plethora of government-funded censorship technologies. With a budget of about $10 billion (and a recent 18% increase requested from Congress), the Foundation serves as a huge federal source of academic research funding and accounts for 90% of university research grants concerning computer science.
Although the Convergence Accelerator program was started under the Trump administration, it was meant to integrate research and innovation into areas like quantum technology, which heavily impacts fields such as space exploration.
However, the FFO says that the Biden Administration added “Track F,” which is dedicated to the “science of censorship.” The NSF’s 2022 literature says that it was created because “Although false claims…have existed throughout history, the problems that they cause have reached critical proportions,” and the NSF describes the program as one that builds “trust and authenticity in communications.”
According to the FFO, the goals of Convergence Accelerator Track F are quite similar to those of the “military-grade social media network censorship and monitoring tools developed by the Pentagon for the counterinsurgency and counterterrorism contexts abroad.”
The primary goals of DARPA (Defense Advanced Research Projects Agency) are listed as:
- Detecting, classifying, measuring and tracking what they call “disinformation” ideas and memes
- Recognizing persuasion campaign structures and influence operations across social media sites and communities
- Identifying participants and intent, and measuring the effects of persuasion campaigns
- Counter-messaging of detected adversary influence operations
This new “Science of Censorship,” as the FFO calls it, includes WiseDex, which is led by the University of Michigan and promoted on the university’s website on the “School of Information Center for Social Media Responsibility” page.
The FFO reports that WiseDex “harnesses the wisdom of crowds and AI technology to help flag more (social media) posts.” The WiseDex promo video says that the result is a “more comprehensive, equitable and consistent enforcement, significantly reducing the spread of misinformation.”
The FFO says that WiseDex is building “sprawling databases of banned keywords and factual claims to sell to companies like Facebook, YouTube and Twitter. It then integrates the banned-claims databases into censorship algorithms, so that ‘harmful misinformation stops reaching big audiences.’”
WiseDex has received $750,000 in taxpayer-funded grants during Phase 1 of its project, with another potential $5 million available.
To keep track of so much information and so many people, another grant was given to “Course Correct,” which is led by the University of Wisconsin.
The FFO says the program, which includes a dashboard the researchers have developed, allows the US government to “directly and deliberately” sponsor the construction of massive databases of “misinformation tweeters, misinformation retweeters, misinformation followers and misinformation followees” for censorship and tracking. The FFO says that Course Correct primarily targets Americans who are “thought dissidents” in two main topic areas: vaccine hesitancy and electoral skepticism.
To assist in the “science of censorship,” a former DARPA official is now part of the NSF’s Convergence Accelerator program. Dr. Douglas Maughan, a former DARPA program manager, now oversees the Track F censorship projects. He also previously worked for the National Security Agency and the Department of Homeland Security.
Over the past few years, we have seen governments, the media, social media platforms and self-appointed “fact-checkers” acting as the arbiters of truth, deciding who gets flagged, banned, suspended, deleted and canceled.
Now we have Michigan State University using an NSF grant to study local social media community groups in an effort to build better tools to control how they operate.
In the fall of 2022, MSU received $203K in grant money to assist the federal government with censorship-related information gathering regarding “threats in local communities.” Their research involves mapping “how volunteer moderators are functioning as gatekeepers of local civic information and first responders to information threats within online local community groups.”
The grant information says that the project is a “crucial first step toward building effective tools (social, technological and policy) that can support moderators of online community groups in their efforts to respond to the threats caused by information pollution.”
So while Americans are signing up to join popular local community social media groups, MSU is working to make sure the content moderators are censoring information the right way, since those moderators face “significant threats to information quality” as untrained volunteers serving as the “first line of defense.”
The project has three goals:
- Examining how well platform companies prepare local group moderators to face information threats
- Understanding how local group moderators manage information threats in their everyday practices
- Understanding how moderator practices respond to challenges during periods of heightened threats to information quality
The research will expand the knowledge of how to better support volunteer moderators on the digital platforms of Facebook, Nextdoor and Reddit, whom the researchers describe as the gatekeepers of a local information infrastructure. This will be done through an analysis of the tools and training provided to moderators; an eight-week remote community study with moderators of local groups; and a follow-up interview study examining moderator practices.
The collaborative grant was given to ComArtSci professor and Associate Dean for Strategic Initiatives, Kjerstin Thorson, to oversee the project in collaboration with Pennsylvania State University and Arizona State University.
Thorson said in an online MSU press release, “As local news media has declined, digital platforms like Facebook and Nextdoor have quickly become sources of local information for many people – in some communities, online groups are one of the last remaining places to find out what is happening locally.” She continued, “We decided we wanted to better understand the ‘gatekeepers’ of these local groups: the volunteer content moderators who decide what is acceptable to post, whether to delete or disallow certain kinds of content, and who support the kinds of connections and conversations that happen in these spaces. These new ‘gatekeepers’ often face difficult situations, such as having to deal with partisan disagreements or even the presence of disinformation.”
Michigan News Source reached out to Michigan State University about the progress of this grant, which was awarded in September of 2022, but Sydney Hawkins, Director of Public Relations for the University, said, “There is nothing new to report on the grant right now since the project is just getting started.”
The aforementioned MSU press release says that the project will focus on the 2024 presidential election because national politics plays a large role in shaping how communities talk about local issues.
Michigan State University also received an additional $109K from the NSF for something called “Extracting the Backbone of Unweighted Networks,” which boils down to grant money for analyzing the best methods to collect and analyze censorship data. The NSF website concerning the grant says that “Backbone extraction involves identifying and retaining only the most important relationships, which yields a simpler network that can be more readily analyzed and visualized.”
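For readers curious about what that jargon means in practice, here is a minimal, purely illustrative sketch of backbone extraction in general, written in Python with the open-source networkx library. It uses edge betweenness as a stand-in measure of which relationships are “most important”; that choice is an assumption made for illustration only, not the method developed under the NSF grant.

import networkx as nx

# Illustrative sketch only: rank every edge of an unweighted network by
# betweenness centrality and keep the top slice, yielding a simpler graph
# that is easier to analyze and visualize (the general idea the NSF text
# describes, not the grant's actual algorithm).
def extract_backbone(graph: nx.Graph, keep_fraction: float = 0.2) -> nx.Graph:
    scores = nx.edge_betweenness_centrality(graph)
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    backbone = nx.Graph()
    backbone.add_nodes_from(graph.nodes)
    backbone.add_edges_from(ranked[:n_keep])
    return backbone

if __name__ == "__main__":
    g = nx.karate_club_graph()  # small, unweighted example network
    bb = extract_backbone(g, keep_fraction=0.25)
    print(g.number_of_edges(), "edges reduced to", bb.number_of_edges())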
In addition to the WiseDex project out of the University of Michigan, U of M also received $83K for a project to “measure protest legacies in non-democratic states” and to examine how large-scale protests represent missed opportunities to improve understanding of why some cases of mobilization lead to renewed protest and unexpected regime change while others build regime support.
Although the federal government has been on a path to control the freedom of speech in America, the American people have, so far, been moderately protected by their own ‘gatekeepers,’ including the Supreme Court.
Supreme Court Justice Anthony M. Kennedy wrote in 2002, “First Amendment freedoms are most in danger when the government seeks to control thought or to justify its laws for that impermissible end. The right to think is the beginning of freedom, and speech must be protected from the government because speech is the beginning of thought.”