Listen Up

Saturday, July 27, 2024

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The White House


This executive order addresses privacy and security in the use of AI.  

The administration places the highest urgency on governing the development and use of AI safely and responsibly and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead for our security, economy, and society.

Although the United States lags behind the EU in formulating regulations to guide responsible use of AI, at the federal level some executive actions may stimulate Congress and other regulatory agencies to plan ahead.

If you do not wish to read the following governmental legalese, I refer you to reference 1 below.

The guidance uses some constitutional guarantees to assure privacy.

The guidance is overreaching. The order warns that irresponsible use of AI could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits, it says, requires mitigating its substantial risks, an endeavor that demands a society-wide effort including government, the private sector, academia, and civil society. The order also seeks to advance broad guidance to include DEI, social justice, and economic guarantees.

In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built. 

Policy and Principles.  It is the policy of my Administration to advance and govern the development and use of AI following eight guiding principles and priorities.  When undertaking the actions outlined in this order, executive departments and agencies (agencies) shall, as appropriate and consistent with applicable law, adhere to these principles, while, as feasible, taking into account the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations:

     (a)  Artificial Intelligence must be safe and secure.  Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use.  It also requires addressing AI systems’ most pressing security risks — including with respect to biotechnology, cybersecurity, critical infrastructure, and other national security dangers — while navigating AI’s opacity and complexity.  Testing and evaluations, including post-deployment performance monitoring, will help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner, and are compliant with applicable Federal laws and policies.  Finally, my Administration will help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not.  These actions will provide a vital foundation for an approach that addresses AI’s risks without unduly reducing its benefits. 

     (b)  Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.  This effort requires investments in AI-related education, training, development, research, and capacity, while simultaneously tackling novel intellectual property (IP) questions and other problems to protect inventors and creators.  Across the Federal Government, my Administration will support programs to provide Americans the skills they need for the age of AI and attract the world’s AI talent to our shores — not just to study, but to stay — so that the companies and technologies of the future are made in America.  The Federal Government will promote a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation.  Doing so requires stopping unlawful collusion and addressing risks from dominant firms’ use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors, and it requires supporting a marketplace that harnesses the benefits of AI to provide new opportunities for small businesses, workers, and entrepreneurs. 

     (c)  The responsible development and use of AI require a commitment to supporting American workers.  As AI creates new jobs and industries, all workers need a seat at the table, including through collective bargaining, to ensure that they benefit from these opportunities.  My Administration will seek to adapt job training and education to support a diverse workforce and help provide access to opportunities that AI creates.  In the workplace itself, AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions.  The critical next steps in AI development should be built on the views of workers, labor unions, educators, and employers to support responsible uses of AI that improve workers’ lives, positively augment human work, and help all people safely enjoy the gains and opportunities from technological innovation.

     (d)  Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights.  My Administration cannot — and will not — tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice.  From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life.  Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.  My Administration will build on the important steps that have already been taken — such as issuing the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and Executive Order 14091 of February 16, 2023 (Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government) — in seeking to ensure that AI complies with all Federal laws and to promote robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation.  It is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse, including in the justice system and the Federal Government.  Only then can Americans trust AI to advance civil rights, civil liberties, equity, and justice for all.

     (e)  The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.  Use of new technologies, such as AI, does not excuse organizations from their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change.  The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI.  Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights.  At the same time, my Administration will promote responsible uses of AI that protect consumers, raise the quality of goods and services, lower their prices, or expand selection and availability.

     (f)  Americans’ privacy and civil liberties must be protected as AI continues advancing.  Artificial Intelligence is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires.  Artificial Intelligence’s capabilities in these areas can increase the risk that personal data could be exploited and exposed.  To combat this risk, the Federal Government will ensure that the collection, use, and retention of data is lawful, is secure, and mitigates privacy and confidentiality risks.  Agencies shall use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the broader legal and societal risks — including the chilling of First Amendment rights — that result from the improper collection and use of people’s data.

     (g)  It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.  These efforts start with people, our Nation’s greatest asset.  My Administration will take steps to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines — including technology, policy, managerial, procurement, regulatory, ethical, governance, and legal fields — and ease AI professionals’ path into the Federal Government to help harness and govern AI.  The Federal Government will work to ensure that all members of its workforce receive adequate training to understand the benefits, risks, and limitations of AI for their job functions, and to modernize Federal Government information technology infrastructure, remove bureaucratic obstacles, and ensure that safe and rights-respecting AI is adopted, deployed, and used. 

     (h)  The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change.  This leadership is not measured solely by the technological advancements our country makes.  Effective leadership also means pioneering those systems and safeguards needed to deploy technology responsibly — and building and promoting those safeguards with the rest of the world.  My Administration will engage with international allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.  The Federal Government will seek to promote responsible AI safety and security principles and actions with other nations, including our competitors, while leading key global conversations and collaborations to ensure that AI benefits the whole world, rather than exacerbating inequities, threatening human rights, and causing other harms. 

     Sec. 3.  Definitions.  For purposes of this order:

     (a)  The term “agency” means each agency described in 44 U.S.C. 3502(1), except for the independent regulatory agencies described in 44 U.S.C. 3502(5).

     (b)  The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3):  a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.  Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

     (c)  The term “AI model” means a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.

     (d)  The term “AI red-teaming” means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.  Artificial Intelligence red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.

     (e)  The term “AI system” means any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.

     (f)  The term “commercially available information” means any information or data about an individual or group of individuals, including an individual’s or group of individuals’ device or location, that is made available or obtainable and sold, leased, or licensed to the general public or to governmental or non-governmental entities. 

     (g)  The term “crime forecasting” means the use of analytical techniques to attempt to predict future crimes or crime-related information.  It can include machine-generated predictions that use algorithms to analyze large volumes of data, as well as other forecasts that are generated without machines and based on statistics, such as historical crime statistics.

     (h)  The term “critical and emerging technologies” means those technologies listed in the February 2022 Critical and Emerging Technologies List Update issued by the National Science and Technology Council (NSTC), as amended by subsequent updates to the list issued by the NSTC. 

     (i)  The term “critical infrastructure” has the meaning set forth in section 1016(e) of the USA PATRIOT Act of 2001, 42 U.S.C. 5195c(e).

     (j)  The term “differential-privacy guarantee” means protections that allow information about a group to be shared while provably limiting the improper access, use, or disclosure of personal information about particular entities.  
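
The definition is abstract, so here is a minimal Python sketch of the best-known such guarantee, the Laplace mechanism applied to a count query. The dataset, query, and epsilon value are hypothetical choices of mine, not anything specified in the order.

    import numpy as np

    def dp_count(records, predicate, epsilon=0.5):
        """Answer a count query with an epsilon-differential-privacy guarantee.

        A count query has sensitivity 1 (adding or removing one person changes
        the count by at most 1), so Laplace noise with scale 1/epsilon provably
        limits what the output reveals about any single individual.
        """
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    ages = [34, 71, 52, 68, 80, 47]                        # hypothetical data
    print(dp_count(ages, lambda a: a >= 65, epsilon=0.5))  # noisy count near 3

Smaller epsilon means more noise and stronger privacy; the aggregate statistic stays useful while any one individual's contribution is hidden.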

     (k)  The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

          (i)    substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;

          (ii)   enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or

          (iii)  permitting the evasion of human control or oversight through means of deception or obfuscation.

Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities. 

     (l)  The term “Federal law enforcement agency” has the meaning outlined in section 21(a) of Executive Order 14074 of May 25, 2022 (Advancing Effective, Accountable Policing and Criminal Justice Practices To Enhance Public Trust and Public Safety).

     (m)  The term “floating-point operation” means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base.
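
That phrasing can be seen directly in Python (my illustration, not the order's): every finite float really is an integer scaled by an integer exponent of a fixed base, base 2 on typical hardware.

    # Decompose 6.5 into its integer-scaled-by-a-power-of-two form.
    n, d = (6.5).as_integer_ratio()   # -> (13, 2), i.e., 6.5 == 13 * 2**-1
    assert n / d == 6.5               # integer mantissa 13, exponent -1, base 2

    # A floating-point operation is then any arithmetic on such values:
    result = 6.5 * 0.25 + 1.0         # two floating-point operations (FLOPs)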

     (n)  The term “foreign person” has the meaning outlined in section 5(c) of Executive Order 13984 of January 19, 2021 (Taking Additional Steps To Address the National Emergency Concerning Significant Malicious Cyber-Enabled Activities).

     (o)  The terms “foreign reseller” and “foreign reseller of United States Infrastructure as a Service Products” mean a foreign person who has established an Infrastructure as a Service Account to provide Infrastructure as a Service Products subsequently, in whole or in part, to a third party.

     (p)  The term “generative AI” means the class of AI models that emulate the structure and characteristics of input data to generate derived synthetic content.  This can include images, videos, audio, text, and other digital content.

     (q)  The terms “Infrastructure as a Service Product,” “United States Infrastructure as a Service Product,” “United States Infrastructure as a Service Provider,” and “Infrastructure as a Service Account” each have the respective meanings given to those terms in section 5 of Executive Order 13984.

     (r)  The term “integer operation” means any mathematical operation or assignment involving only integers, or whole numbers expressed without a decimal point.

     (s)  The term “Intelligence Community” has the meaning given to that term in section 3.5(h) of Executive Order 12333 of December 4, 1981 (United States Intelligence Activities), as amended. 

     (t)  The term “machine learning” means a set of techniques that can be used to train AI algorithms to improve performance at a task based on data.

     (u)  The term “model weight” means a numerical parameter within an AI model that helps determine the model’s outputs in response to inputs.

     (v)  The term “national security system” has the meaning outlined in 44 U.S.C. 3552(b)(6).

     (w)  The term “omics” means biomolecules, including nucleic acids, proteins, and metabolites, that make up a cell or cellular system.

     (x)  The term “Open RAN” means the Open Radio Access Network approach to telecommunications-network standardization adopted by the O-RAN Alliance, Third Generation Partnership Project, or any similar set of published open standards for multi-vendor network equipment interoperability.

     (y)  The term “personally identifiable information” has the meaning outlined in Office of Management and Budget (OMB) Circular No. A-130.

     (z)  The term “privacy-enhancing technology” means any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality.  These technological means may include secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy, and synthetic data generation tools.  This is also sometimes referred to as “privacy-preserving technology.”

     (aa)  The term “privacy impact assessment” has the meaning outlined in OMB Circular No. A-130.

     (bb)  The term “Sector Risk Management Agency” has the meaning outlined in 6 U.S.C. 650(23).

     (cc)  The term “self-healing network” means a telecommunications network that automatically diagnoses and addresses network issues to permit self-restoration.

     (dd)  The term “synthetic biology” means a field of science that involves redesigning organisms, or the biomolecules of organisms, at the genetic level to give them new characteristics.  Synthetic nucleic acids are a type of biomolecule redesigned through synthetic biology methods.

     (ee)  The term “synthetic content” means information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI.

     (ff)  The term “testbed” means a facility or mechanism equipped for conducting rigorous, transparent, and replicable testing of tools and technologies, including AI and PETs, to help evaluate the functionality, usability, and performance of those tools or technologies.

     (gg)  The term “watermarking” means the act of embedding information, which is typically difficult to remove, into outputs created by AI — including outputs such as photos, videos, audio clips, or text — to verify the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.
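
Production provenance schemes are far more sophisticated, and far harder to strip, than anything shown here, but a toy least-significant-bit sketch in Python conveys the basic idea of embedding information in an output. The functions and the two-byte tag are my own illustration, not a scheme referenced by the order.

    def embed_watermark(pixels: bytearray, tag: bytes) -> bytearray:
        """Hide `tag` in the least-significant bit of each byte of `pixels`."""
        out = bytearray(pixels)
        bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
        if len(bits) > len(out):
            raise ValueError("carrier too small for watermark")
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
        return out

    def extract_watermark(pixels: bytearray, n_bytes: int) -> bytes:
        bits = [p & 1 for p in pixels[: n_bytes * 8]]
        return bytes(sum(bits[b * 8 + i] << i for i in range(8))
                     for b in range(n_bytes))

    image = bytearray(range(256))             # stand-in for raw pixel data
    marked = embed_watermark(image, b"AI")
    assert extract_watermark(marked, 2) == b"AI"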

     Sec. 4.  Ensuring the Safety and Security of AI Technology.

     4.1.  Developing Guidelines, Standards, and Best Practices for AI Safety and Security.  (a)  Within 270 days of the date of this order, to help ensure the development of safe, secure, and trustworthy AI systems, the Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (NIST), in coordination with the Secretary of Energy, the Secretary of Homeland Security, and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall:

          (i)   Establish guidelines and best practices, to promote consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems, including:

               (A)  developing a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI;

               (B)  developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models; and

               (C)  launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.

          (ii)  Establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.  These efforts shall include:

               (A)  coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models; and

               (B)  in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as to support the design, development, and deployment of associated PETs, consistent with section 9(b) of this order. 

     (b)  Within 270 days of the date of this order, to understand and mitigate AI security risks, the Secretary of Energy, in coordination with the heads of other Sector Risk Management Agencies (SRMAs) as the Secretary of Energy may deem appropriate, shall develop and, to the extent permitted by law and available appropriations, implement a plan for developing the Department of Energy’s AI model evaluation tools and AI testbeds.  The Secretary shall undertake this work using existing solutions where possible, and shall develop these tools and AI testbeds to be capable of assessing near-term extrapolations of AI systems’ capabilities.  At a minimum, the Secretary shall develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards.  The Secretary shall do this work solely for the purposes of guarding against these threats, and shall also develop model guardrails that reduce such risks.  The Secretary shall, as appropriate, consult with private AI laboratories, academia, civil society, and third-party evaluators, and shall use existing solutions.

     4.2.  Ensuring Safe and Reliable AI.  (a)  Within 90 days of the date of this order, to ensure and verify the continuous availability of safe, reliable, and effective AI in accordance with the Defense Production Act, as amended, 50 U.S.C. 4501 et seq., including for the national defense and the protection of critical infrastructure, the Secretary of Commerce shall require:

          (i)   Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding the following:

               (A)  any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats;

               (B)  the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; and

               (C)  the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST pursuant to subsection 4.1(a)(ii) of this section, and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security.  Prior to the development of guidance on red-team testing standards by NIST pursuant to subsection 4.1(a)(ii) of this section, this description shall include the results of any red-team testing that the company has conducted relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives; and

          (ii)  Companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.

     (b)  The Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence, shall define, and thereafter update as needed regularly, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements of subsection 4.2(a) of this section.  Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements for:

          (i)   any model that was trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10²³ integer or floating-point operations; and

          (ii)  any computing cluster that has a set of machines physically co-located in a single data center, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10²⁰ integer or floating-point operations per second for training AI.
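
To put those two thresholds in perspective, a back-of-the-envelope calculation (my arithmetic, not the order's): a cluster running exactly at the 10²⁰ operations-per-second reporting threshold would need about 10⁶ seconds, roughly 12 days, of sustained computation to reach the 10²⁶ operations that trigger model-level reporting.

    total_ops = 1e26        # model reporting threshold (operations)
    cluster_rate = 1e20     # cluster reporting threshold (operations/second)

    seconds = total_ops / cluster_rate                       # 1e6 seconds
    print(f"{seconds:.0f} s = {seconds / 86400:.1f} days")   # ~11.6 days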

     (c)  Because I find that additional steps must be taken to deal with the national emergency related to significant malicious cyber-enabled activities declared in Executive Order 13694 of April 1, 2015 (Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities), as amended by Executive Order 13757 of December 28, 2016 (Taking Additional Steps to Address the National Emergency For Significant Malicious Cyber-Enabled Activities), and further amended by Executive Order 13984, to address the use of United States Infrastructure as a Service (IaaS) Products by foreign malicious cyber actors, including to impose additional record-keeping obligations concerning foreign transactions and to assist in the investigation of transactions involving foreign malicious cyber actors, I hereby direct the Secretary of Commerce, within 90 days of the date of this order, to:

          (i)    Propose regulations that require United States IaaS Providers to submit a report to the Secretary of Commerce when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a “training run”).  Such reports shall include, at a minimum, the identity of the foreign person and the existence of any training run of an AI model meeting the criteria outlined in this section, or other criteria defined by the Secretary in regulations, as well as any additional information identified by the Secretary.

          (ii)   Include a requirement in the regulations proposed pursuant to subsection 4.2(c)(i) of this section that United States IaaS Providers prohibit any foreign reseller of their United States IaaS Product from providing those products unless such foreign reseller submits to the United States IaaS Provider a report, which the United States IaaS Provider must provide to the Secretary of Commerce, detailing each instance in which a foreign person transacts with the foreign reseller to use the United States IaaS Product to conduct a training run described in subsection 4.2(c)(i) of this section.  Such reports shall include, at a minimum, the information specified in subsection 4.2(c)(i) of this section as well as any additional information identified by the Secretary.

          (iii)  Determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate.  Until the Secretary makes such a determination, a model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10²⁶ integer or floating-point operations and is trained on a computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10²⁰ integer or floating-point operations per second for training AI.

     (d)  Within 180 days of the date of this order, under the finding outlined in subsection 4.2(c) of this section, the Secretary of Commerce shall propose regulations that require United States IaaS Providers to ensure that foreign resellers of United States IaaS Products verify the identity of any foreign person that obtains an IaaS account (account) from the foreign reseller.  These regulations shall, at a minimum:

          (i)    Set forth the minimum standards that a United States IaaS Provider must require of foreign resellers of its United States IaaS Products to verify the identity of a foreign person who opens an account or maintains an existing account with a foreign reseller, including:

               (A)  the types of documentation and procedures that foreign resellers of United States IaaS Products must require to verify the identity of any foreign person acting as a lessee or sub-lessee of these products or services;

               (B)  records that foreign resellers of United States IaaS Products must securely maintain regarding a foreign person that obtains an account, including information establishing:

                    (1)  the identity of such foreign person, including name and address;

                    (2)  the means and source of payment (including any associated financial institution and other identifiers such as credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier);

                    (3)  the electronic mail address and telephonic contact information used to verify a foreign person’s identity; and

                    (4)  the Internet Protocol addresses used for access or administration and the date and time of each such access or administrative action related to ongoing verification of such foreign person’s ownership of such an account; and

               (C)  methods that foreign resellers of United States IaaS Products must implement to limit all third-party access to the information described in this subsection, except insofar as such access is otherwise consistent with this order and allowed under applicable law;

          (ii)   Take into consideration the types of accounts maintained by foreign resellers of United States IaaS Products, methods of opening an account, and types of identifying information available to accomplish the objectives of identifying foreign malicious cyber actors using any such products and avoiding the imposition of an undue burden on such resellers; and

          (iii)  Provide that the Secretary of Commerce, in accordance with such standards and procedures as the Secretary may delineate and in consultation with the Secretary of Defense, the Attorney General, the Secretary of Homeland Security, and the Director of National Intelligence, may exempt a United States IaaS Provider with respect to any specific foreign reseller of their United States IaaS Products, or with respect to any specific type of account or lessee, from the requirements of any regulation issued pursuant to this subsection.  Such standards and procedures may include a finding by the Secretary that such foreign reseller, account, or lessee complies with security best practices to otherwise deter abuse of United States IaaS Products.

     (e)  The Secretary of Commerce is hereby authorized to take such actions, including the promulgation of rules and regulations, and to employ all powers granted to the President by the International Emergency Economic Powers Act, 50 U.S.C. 1701 et seq., as may be necessary to carry out the purposes of subsections 4.2(c) and (d) of this section.  Such actions may include a requirement that United States IaaS Providers require foreign resellers of United States IaaS Products to provide United States IaaS Providers verifications relative to those subsections.

     4.3.  Managing AI in Critical Infrastructure and in Cybersecurity.  (a)  To ensure the protection of critical infrastructure, the following actions shall be taken:

          (i)    Within 90 days of the date of this order, and at least annually thereafter, the head of each agency with relevant regulatory authority over critical infrastructure and the heads of relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities.  Independent regulatory agencies are encouraged, as they deem appropriate, to contribute to sector-specific risk assessments.

          (ii)   Within 150 days of the date of this order, the Secretary of the Treasury shall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.

          (iii)  Within 180 days of the date of this order, the Secretary of Homeland Security, in coordination with the Secretary of Commerce and with SRMAs and other regulators as determined by the Secretary of Homeland Security, shall incorporate as appropriate the AI Risk Management Framework, NIST AI 100-1, as well as other appropriate security guidance, into relevant safety and security guidelines for use by critical infrastructure owners and operators.

          (iv)   Within 240 days of the completion of the guidelines described in subsection 4.3(a)(iii) of this section, the Assistant to the President for National Security Affairs and the Director of OMB, in consultation with the Secretary of Homeland Security, shall coordinate work by the heads of agencies with authority over critical infrastructure to develop and take steps for the Federal Government to mandate such guidelines, or appropriate portions thereof, through regulatory or other appropriate action.  Independent regulatory agencies are encouraged, as they deem appropriate, to consider whether to mandate guidance through regulatory action in their areas of authority and responsibility.

          (v)    The Secretary of Homeland Security shall establish an Artificial Intelligence Safety and Security Board as an advisory committee pursuant to section 871 of the Homeland Security Act of 2002 (Public Law 107-296).  The Advisory Committee shall include AI experts from the private sector, academia, and government, as appropriate, and provide to the Secretary of Homeland Security and the Federal Government’s critical infrastructure community advice, information, or recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure.

     (b)  To capitalize on AI’s potential to improve United States cyber defenses:

          (i)    The Secretary of Defense shall carry out the actions described in subsections 4.3(b)(ii) and (iii) of this section for national security systems, and the Secretary of Homeland Security shall carry out these actions for non-national security systems.  Each shall do so in consultation with the heads of other relevant agencies as the Secretary of Defense and the Secretary of Homeland Security may deem appropriate. 

          (ii)   As set forth in subsection 4.3(b)(i) of this section, within 180 days of the date of this order, the Secretary of Defense and the Secretary of Homeland Security shall, consistent with applicable law, each develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical United States Government software, systems, and networks.

          (iii)  As set forth in subsection 4.3(b)(i) of this section, within 270 days of the date of this order, the Secretary of Defense and the Secretary of Homeland Security shall each provide a report to the Assistant to the President for National Security Affairs on the results of actions taken pursuant to the plans and operational pilot projects required by subsection 4.3(b)(ii) of this section, including a description of any vulnerabilities found and fixed through the development and deployment of AI capabilities and any lessons learned on how to identify, develop, test, evaluate, and deploy AI capabilities effectively for cyber defense.

     4.4.  Reducing Risks at the Intersection of AI and CBRN Threats.  (a)  To better understand and mitigate the risk of AI being misused to assist in the development or use of CBRN threats — with a particular focus on biological weapons — the following actions shall be taken: 

          (i)   Within 180 days of the date of this order, the Secretary of Homeland Security, in consultation with the Secretary of Energy and the Director of the Office of Science and Technology Policy (OSTP), shall evaluate the potential for AI to be misused to enable the development or production of CBRN threats, while also considering the benefits and application of AI to counter these threats, including, as appropriate, the results of work conducted under section 8(b) of this order.  The Secretary of Homeland Security shall:

               (A)  consult with experts in AI and CBRN issues from the Department of Energy, private AI laboratories, academia, and third-party model evaluators, as appropriate, to evaluate AI model capabilities to present CBRN threats — for the sole purpose of guarding against those threats — as well as options for minimizing the risks of AI model misuse to generate or exacerbate those threats; and

               (B)  submit a report to the President that describes the progress of these efforts, including an assessment of the types of AI models that may present CBRN risks to the United States, and that makes recommendations for regulating or overseeing the training, deployment, publication, or use of these models, including requirements for safety evaluations and guardrails for mitigating potential threats to national security.

          (ii)  Within 120 days of the date of this order, the Secretary of Defense, in consultation with the Assistant to the President for National Security Affairs and the Director of OSTP, shall enter into a contract with the National Academies of Sciences, Engineering, and Medicine to conduct — and submit to the Secretary of Defense, the Assistant to the President for National Security Affairs, the Director of the Office of Pandemic Preparedness and Response Policy, the Director of OSTP, and the Chair of the Chief Data Officer Council — a study that:

               (A)  assesses how AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks;

               (B)  considers the national security implications of the use of data and datasets, especially those associated with pathogens and omics studies, that the United States Government hosts, generates, funds the creation of, or otherwise owns, for the training of generative AI models.

The United States Patent and Trademark Office weighs in as well, since AI raises substantial intellectual property questions in both computer hardware and software.

The USPTO has announced a public roundtable on August 5, 2024. To watch the livestream, visit the registration page; there is no charge to view either session, but advance registration is required, and you may also ask questions or make comments. Requests to be a panelist at either session must be submitted by July 31, 2024, via email to NILroundtable@uspto.gov, and must indicate the session, the requestor's name, professional affiliation, and contact information, along with a short summary of the topics they intend to address.

1. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The White House

Monday, July 22, 2024

We Now Have Proof the COVID Vaccines Damage Cognition – Vigilant News Network

The Covid furor is over; the media has extracted as much income from it as it can and has moved on to politics.

(Addendum)

This article is very relevant to recent political events. The Trump-Biden debate illustrates what Covid vaccinations can do to mental cognition.

President Biden underwent a series of six Covid vaccinations over a period of three years.

The COVID-19 vaccines excel at causing damage to cognition, and many of us have noticed both subtle and overt cognitive impairment following vaccination that relatively few people know how to address.

• For a long time, the hypothesis that the vaccines impaired cognition was “anecdotal” because it was based on individuals observing it in their peer group or patients.

• Recently, large datasets have emerged which show this phenomenon is very real and that the severe injuries we’ve seen from the vaccines (e.g., sudden death) are only the tip of the iceberg.

• In this article, we will review the proof that the vaccines are doing this and explore the mechanisms that allow it to happen so we can better understand how to treat it.

When the COVID-19 vaccines were brought to market, I expected them, given their design, to have safety issues, and I expected that, over the long term, a variety of chronic issues would be linked to them. This was because there were a variety of reasons to suspect they would cause autoimmune disorders, fertility issues, and cancers, yet for some reason (as shown by the Pfizer EMA leaks), the vaccines had been exempted from appropriate testing for any of these issues prior to being given to humans.


Since all new drugs are required to receive that testing, I interpreted this as a tacit admission that it was known major issues would emerge in these areas, and that a decision was made that it was better not to officially test for any of them so there would be no data showing Pfizer “knew” the problems would develop, allowing it to claim plausible deniability. Sadly, since the vaccines entered the market, those three issues (especially autoimmunity) have become some of the most common severe events associated with them.

At the start of the vaccine rollout, four red flags stood out to me:

• The early advertising campaigns for the vaccines mentioned that you would feel awful when you got the vaccine, but that was fine and a sign the vaccine was working. Even with vaccines that had a very high rate of adverse events (e.g., the HPV vaccine), I had never seen this messaging before. This signified it was likely the adverse event rate with the spike protein vaccines would be much higher than normal.

• Many of my colleagues who got the vaccine (since they were healthcare workers, they were able to get it first) posted on social media about just how awful they felt after getting it. This was also something I had never seen with a previous vaccine. After some digging, I noticed that those with the worst vaccine reactions typically had already had COVID and that their reaction was to the second shot rather than the first, signifying that some type of increased sensitization was occurring from repeated exposures to the spike protein. Likewise, the published clinical trial of Pfizer’s vaccine also showed adverse reactions were dramatically higher with the second shot than with the first.

• Once the vaccine became available to the general public, I immediately had patients start showing up with vaccine reactions, many of whom stated they received their flu shot each year and had never experienced anything similar with a previous vaccination. One of the most concerning things was the exacerbation of pre-existing autoimmune diseases (e.g., spots in the body where patients previously would occasionally have arthritis all felt like they were on fire). After I started looking into this, I realized people were seeing a 15-25% rate of new autoimmune disorders, or exacerbations of existing ones, developing after the vaccine, a massive increase I had never seen any previous vaccine cause.

Note: this was demonstrated by a February 2022 Israeli survey which showed 3% of vaccine recipients experienced a new autoimmune disorder and 15% experienced an exacerbation of a pre-existing one; by a rheumatologic database study published in the BMJ, which found 4.4% of recipients experienced an exacerbation of a pre-existing autoimmune disease; and by a survey by a private physician of 566 patients, which found vaccination spiked their inflammatory markers, raising their five-year risk of a heart attack from 11% to 25%.

• About a month after the vaccines were available to the public, I started having friends and patients share that they’d known someone who had unexpectedly died suddenly after receiving the vaccine (typically from a heart attack, stroke, or a sudden aggressive case of COVID-19).

This was also extremely concerning to me, because reactions to a toxin typically distribute on a bell curve, with the severe ones being much rarer than the moderate ones. This meant that if that many severe reactions were occurring, what I could already see was only the tip of the iceberg and far, far less obvious reactions were going to be happening, to the point it was likely many people I knew would end up experiencing complications from the vaccine.


A Parent’s Guide to Internet Safety for Kids

 

Many studies reveal how the internet affects the mental health of children.

The impact of the internet on the mental health of children is a complex and nuanced issue. Here are some key ways the internet can affect children's mental health:


There are some positive as well as negative impacts.

Positive impacts:

• Access to information and educational resources can support learning and development.
• Opportunities for social connection and community building can reduce isolation.
• Exposure to diverse perspectives and ideas can broaden horizons.
• Outlets for self-expression and creativity.

Negative impacts:

• Excessive screen time and internet use can disrupt sleep, physical activity, and face-to-face social interaction.
• Exposure to cyberbullying, trolling, and other harmful online behaviors can contribute to anxiety, depression, and low self-esteem.
• Access to inappropriate or disturbing content may be psychologically distressing.
• Social comparison and feelings of inadequacy can arise from seeing idealized content on social media.
• Addiction-forming behaviors can develop from excessive internet and social media use.

The degree of impact often depends on factors like the child's age, maturity, mental health, and parental mediation and guidance. While the internet provides many benefits, it's important for parents and caregivers to monitor and moderate children's online activities to mitigate potential mental health risks. A balanced approach that promotes healthy online and offline activities is generally recommended.

Teaching kids about Internet safety

In today's world, the pace of change is rapid, and parents cannot keep up.  On the other hand, our children thrive on new technology and social media. 

If you want to learn more about technology, the internet, and computers, I recommend this website.

Stay sane and keep those little ones safe. Courtesy of Nicole; you may reach her at nicole@onlinesafetymasters.com.

ref:

Title: "The Impact of Digital Technology on Children's Well-Being"Authors: Przybylski, A. K., & Weinstein, N.Journal: Pediatrics, 140(Supplement 2), S86-S91. Publication Date: 2017

Title: "The Impact of Social Media Use on Children's and Adolescents' Mental Health" Authors: Hollis, C., Livingstone, S., & Sonuga-Barke, E.Journal: The Lancet Child & Adolescent Health, 4(5), 337-343. Publication Date: 2020 Key Findings:

Authors: Hale, L., & Guan, S. Journal: Child Development, 86(1), 45-63. Publication Date: 2015

 Title: "The Impact of Excessive Internet and Smartphone Use on Mental Health: A Systematic Review" Authors: Király, O., Potenza, M. N., Stein, D. J., King, D. L., Hodgins, D. C., Saunders, J. B., ... & Demetrovics, Z.Journal: Journal of Behavioral Addictions, 9(4), 1031-1064. 2020Publication Date:  

If you want to read more about this issue, see:

https://zentrointernet.com/a-parents-guide-to-internet-safety-for-kids/

Thursday, July 18, 2024

Is Everything Health Care? The Overblown Social Determinants of Health | Manhattan Institute

In November 2023, the Biden White House released the 53-page U.S. Playbook to Address Social Determinants of Health and declared: “Improving health and well-being across America requires addressing the social circumstances and related environmental hazards and exposures that improve health outcomes.” It accepted the Department of Health and Human Services’ definition of SDOH as “the conditions in the environments where people are born, live, learn, work, play, worship, and age that affect a wide range of health, functioning, and quality-of-life outcomes and risks.”[1]


The same month as the Playbook's release, federal regulations revising Medicare payments to physicians declared that "around 50 percent of an individual's health is directly related to SDOH."[2] Others have suggested that 40% to 90% of health outcomes are attributable to social, behavioral, or economic factors.[3]

These claims are based on some stark facts. Low-income Americans have higher rates of disability, anxiety, heart disease, stroke, diabetes, and other chronic conditions, and they are more subject to obesity, substance abuse, physical strain, and environmental pollutants.[4] From 2001 to 2014, the life expectancy of the richest 1% of Americans averaged 15 years longer than that of the poorest 1%.[5] Furthermore, poorer social classes have worse health outcomes even when they receive the same access to medical care.

Possible sources of SDOH influence

Neighborhoods
• Air and water quality
• Hazards (lead paint, vermin, mold, dust, infectious disease)
• Service availability (schools, transportation, medical care, employment)

Education
• Knowledge of healthy behaviors
• Employment opportunities (conditions, compensation)

Economic
• Personal income and wealth
• Workplace safety (injuries, chemical exposure, repetitive strain)
• Work pressures (stress, sleep, social support, financial anxiety)

Social relations
• Social harmony (crime, violence, anxiety, social trust)
• Racial disadvantages (discrimination, prejudice, animosities)
• Community ties (social status, social networks)
• Cultural pressures (substance use, illegal activity, diet, exercise)

Feedback cycle between ill-health and poverty

In addition, low-income areas are often healthcare deserts, with few providers or pharmacies and poor transportation.

Friday, July 5, 2024

Alzheimer's: AI tool may help predict risk with almost 80% accuracy

Continuing our series on the diagnosis and treatment of Alzheimer's disease, the Health Train Express research team discovered this new knowledge base for A.D.

Alzheimer's disease (AD) is the most common cause of dementia and has a long prodromal phase, during which subtle cognitive changes occur. Mild cognitive impairment (MCI) is a stage between normal cognition and AD. Individuals with MCI are at higher risk of developing AD, with a 3% to 15% conversion rate of MCI to AD every year.[1,2] Therefore, accurately predicting the progression of MCI to AD can assist physicians in making decisions regarding patient treatment, participation in cognitive rehabilitation programs, and selection for clinical trials involving new drugs.[3]
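
To get a rough sense of what those annual rates imply over a multi-year horizon, here is my own arithmetic, assuming (as a simplification) a constant annual conversion rate:

    # Cumulative probability of MCI -> AD conversion over 6 years.
    for annual_rate in (0.03, 0.15):
        cumulative = 1 - (1 - annual_rate) ** 6
        print(f"{annual_rate:.0%}/yr -> {cumulative:.0%} within 6 years")
    # 3%/yr -> ~17%; 15%/yr -> ~62%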

Dementia directly affects more than 55 million people worldwide, and up to 70% of those people have Alzheimer’s disease, which is characterized by a loss of brain cells associated with the toxic buildup of two proteins, amyloid and tau.

The most common symptoms of Alzheimer’s disease are memory loss, cognitive deficits, problems with speaking, recognition, spatial awareness, reading, or writing, and significant changes in personality and behavior. Since Alzheimer’s is progressive, these symptoms are usually mild at first and tend to become more severe over time. With no cure for the disease, patients and caregivers must approach treatment with medication, lifestyle changes, and support groups. 

AI model may predict Alzheimer’s by analyzing speech patterns


  • Researchers at Boston University say they have designed an artificial intelligence tool that can predict with nearly 80% accuracy whether someone is at risk for developing Alzheimer’s disease based on their speech patterns.
  • The ability to identify potential cognitive decline early has significant potential for mitigating the progression of Alzheimer’s, experts say.
  • However, the sample size used was small, and experts caution that such a tool is not meant to be leaned on as an exclusive method.

  This is from Alzheimer's & Dementia, the journal of the Alzheimer's Association:

INTRODUCTION

Identification of individuals with mild cognitive impairment (MCI) who are at risk of developing Alzheimer's disease (AD) is crucial for early intervention and selection of clinical trials.

METHODS

We applied natural language processing techniques along with machine learning methods to develop a method for automated prediction of progression to AD within 6 years using speech. The method was evaluated on the neuropsychological test interviews of n = 166 participants from the Framingham Heart Study, comprising 90 progressive MCI and 76 stable MCI cases.

RESULTS

Our best models, which used features generated from speech data, age, sex, and education level, achieved an accuracy of 78.5% and a sensitivity of 81.1% to predict MCI-to-AD progression within 6 years.

DISCUSSION

The proposed method offers a fully automated procedure, providing an opportunity to develop an inexpensive, broadly accessible, and easy-to-administer screening tool for MCI-to-AD progression prediction, facilitating the development of remote assessment.

Highlights

  • Voice recordings from neuropsychological exams coupled with basic demographics can lead to strong predictive models of progression to dementia from mild cognitive impairment.
  • The study leveraged AI methods for speech recognition and processed the resulting text using language models.
  • The developed AI-powered pipeline can lead to a fully automated assessment that could enable remote and cost-effective screening and prognosis for Alzheimer's disease.
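
The paper's code is not reproduced here, but the overall shape of such a pipeline is straightforward: transcribe the interview audio, extract text features, append demographics, and train a classifier. Below is a minimal illustrative sketch using scikit-learn; the column names, toy data, and TF-IDF features are assumptions for illustration only, not the authors' actual feature set (which used speech recognition plus language models).

# Illustrative sketch (not the authors' code) of a speech-based MCI-to-AD
# progression classifier: transcript text is vectorized with TF-IDF,
# combined with age, sex, and education, and fed to logistic regression.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: ASR transcripts of neuropsychological test
# interviews plus demographics; label 1 = progressed to AD within 6 years.
df = pd.DataFrame({
    "transcript": ["I went to the market and forgot the bread",
                   "we walked to um the place with the um things"],
    "age": [74, 81],
    "sex": [0, 1],
    "education_years": [16, 12],
    "progressed": [0, 1],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "transcript"),
    ("demographics", "passthrough", ["age", "sex", "education_years"]),
])

model = Pipeline([
    ("features", features),
    ("classifier", LogisticRegression(max_iter=1000)),
])

model.fit(df, df["progressed"])
print(model.predict_proba(df)[:, 1])  # per-participant progression risk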

The prevalence of AD increases annually because patients survive longer, producing a growing expense for long-term care.

Detecting Alzheimer's disease at an early stage may improve treatment options.

Detecting AD early could also improve the quality of life for patients and caregivers.

Clinical trials for AD sponsored by the NIH are ongoing and have yielded mixed results.

Treatment with either gantenerumab or solanezumab, two monoclonal antibodies, did not slow down cognitive decline in people who have a type of early-onset dementia called dominantly inherited Alzheimer’s disease (DIAD), according to a recent study. However, gantenerumab did reduce some biomarkers of the disease. The study, which was funded in part by NIA, was published in Nature Medicine on June 21. DIAD is a rare form of Alzheimer’s disease. It is an inherited condition caused by mutations in certain genes. People who have DIAD often start having symptoms of dementia, such as confusion and problems with memory, reasoning, and judgment, between the ages of 30 and 50. Currently, there is no treatment to prevent or slow down the disease. In the study, researchers led by a team at Washington University School of Medicine in St. Louis tested whether gantenerumab or solanezumab can effectively treat this condition. This study was part of the Dominantly Inherited Alzheimer Network Trials Unit (DIAN-TU). 

Biomarkers for AD

Cerebrospinal fluid (CSF) biomarkers, developed first by Fujirebio more than 25 years ago, have evolved over time from research to specialized diagnostic testing, and from use by early adopters to widespread routine testing today.

Assessing a patient’s CSF allows the detection of four proteins associated with Alzheimer’s disease: two forms of beta-amyloid (Aβ1-42 and Aβ1-40) and two forms of Tau (total Tau and hyperphosphorylated Tau). In Alzheimer’s disease, Aβ1-42 and the Aβ1-42/Aβ1-40 ratio decline to abnormally low levels, and they may decline long before disease symptoms are manifested. High total Tau and phospho-Tau levels may also be observed in Alzheimer’s disease, although these measurements are still considered a research tool in the US.
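In practice, interpreting these CSF values reduces to a ratio-and-cutoff check. Here is a toy sketch; the cutoff below is a placeholder, since validated cutoffs are assay-specific and must come from the laboratory's documentation.

# Toy illustration of CSF amyloid interpretation: compute the Ab42/Ab40
# ratio and flag values below a cutoff. The 0.072 cutoff is a placeholder;
# real cutoffs are assay-specific and clinically validated.
def amyloid_ratio_flag(ab42_pg_ml: float, ab40_pg_ml: float,
                       cutoff: float = 0.072) -> tuple[float, bool]:
    ratio = ab42_pg_ml / ab40_pg_ml
    return ratio, ratio < cutoff  # True = consistent with amyloid pathology

ratio, abnormal = amyloid_ratio_flag(ab42_pg_ml=600.0, ab40_pg_ml=11000.0)
print(f"Ab42/Ab40 = {ratio:.3f}, abnormal: {abnormal}")
# Ab42/Ab40 = 0.055, abnormal: True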


There is an urgent clinical need for low-invasive, affordable techniques to assess the risk of Alzheimer's pathology in patients, which could help refer individuals to specialists for confirmatory testing. Plasma-based biomarkers for the two amyloid proteins (Aβ1-42 and Aβ1-40) and for phospho-Tau are already available from Fujirebio for fully automated testing (for research use only). Research continues at the Fujirebio Neurology Center of Excellence, which has led, for example, to the release of new CSF assays for two promising biomarkers, NPTX2 and sTREM2, both available for research use. Amyloid is an abnormal protein composed of peptides or peptide fragments.

Amyloid Protein Structure

Histopathology of Amyloid Protein in Brain

MMP-9 is associated with Alzheimer's disease. MMPs can be found in CSF and/or plasma (CSF is a clear, colorless fluid that surrounds and protects the brain and spinal cord of vertebrates; it is produced by ependymal cells in the brain's ventricles and flows through the subarachnoid spaces of the cranium and spine). Several lines of evidence indicate that there may be an inflammatory component to the pathology of AD. Matrix metalloproteinases (MMPs) remodel the pericellular environment by regulating the cleavage of extracellular matrix proteins, cell surface components, neurotransmitter receptors, and growth factors. The ability of several MMPs to degrade amyloid precursor protein (APP), leading to aggregation of Aβ, together with the increased expression of MMPs in postmortem brain tissue of AD patients, indicates that MMPs play an important role in the pathogenesis of AD. Their activities are regulated through the induction of transcription by inflammatory mediators, through posttranslational modification by free radicals or cytokines, and through inhibitory proteins such as tissue inhibitors of metalloproteinases (TIMPs).

Ongoing clinical trials and research can be found on PubMed and ClinicalTrials.gov.

Combining PET scans with patient history can afford more specific diagnostic possibilities. A positron emission tomography (PET) scan is a nuclear medicine procedure that uses a scanner to create detailed images of the body. The scan measures metabolic activity in body tissues after a small amount of radioactive glucose (sugar) is injected into a vein. The patient then lies on a table that slides through the PET machine, with a headrest and strap helping to keep them still. The scanner creates pictures of areas inside the body where the glucose is taken up, with brighter spots indicating higher activity.

Research is ongoing at many pharma companies; see, for example, "Biogen, Eli Lilly, and Eisai: Three Companies That May Ride the Advancement in Alzheimer's."




Thursday, July 4, 2024

Preventing Alzheimer's Disease

In medical school, we aren’t taught that there are ways to prevent Alzheimer’s disease, when, in fact, we have a lot of power over the direction our brain function will take. We’re now learning that the disease starts in the brain 20 to 30 years before the first signs of memory loss!

This is why we should all be thinking about prevention.

Here’s the bad news/good news.

Eating sugar and refined carbs can cause pre-dementia and dementia. But cutting out sugar and refined carbs and adding healthy fats may help prevent, and even reverse, pre-dementia and early dementia. Sugar causes pre-diabetes and diabetes, which often lead to significant memory loss.

Chronic stress takes a toll on your body and brain. Stress shrinks the hippocampus, the memory center of the brain. So, find your pause button daily and make time for stress relief. Relaxation isn’t a luxury if you want to prevent or reverse dementia. Whether that involves deep breathing, meditation, or yoga, find something that helps you calm down.

Lack of sleep impairs brain function, leading to CRAFT syndrome, which stands for “can’t remember a _____ thing.” Studies show poor sleep is a risk factor for cognitive decline and Alzheimer’s disease. Aim for at least 8 hours of quality sleep every night.

We now know that physical activity can prevent and even slow down the progression of cognitive decline and brain diseases like dementia. Even a 30-minute walk can help. You might want to incorporate high-intensity interval training or weight lifting if you're already more active.

Monday, July 1, 2024

In a Very Important Ruling, a Judge Strikes Down Parts of HIPAA

With the advent of digital health information and electronic health records, Congress passed the Health Insurance Portability and Accountability Act (HIPAA); all members of the health information niche must realize the importance of patient privacy. Each entity signs an agreement that includes the restrictions on health data. This applies to insurance companies, pharmacies, and any entity that has access to the electronic health care record. Interoperability allows all institutions that need patient health data to access it. The data may be provided through a proprietary exchange, such as EPIC, or as disparate data sets from multiple other vendors. To be certified for interoperability, each vendor must be audited by an interoperability certification company.

In a nutshell, the ruling decided that hospitals can leak patient data to Meta, TikTok, and other third parties via adtech trackers installed in patient portals, and that HHS does not have the authority to enforce HIPAA via its December 2022 bulletin banning surveillance trackers.

This decision has far-reaching implications, especially for patients who rely on robust safeguards for their health information or who have concerns about patient safety and privacy. The case originated from a wave of federal and state class action lawsuits over data leaked from hospital patient portals. As we navigate this new landscape, it’s crucial to understand what this means for patient privacy and what actions we can take to ensure our voices are heard.

Introduction

  • The Standards for Privacy of Individually Identifiable Health Information ("Privacy Rule") establishes, for the first time, a set of national standards for the protection of certain health information. The U.S. Department of Health and Human Services ("HHS") issued the Privacy Rule to implement the requirement of the Health Insurance Portability and Accountability Act of 1996 ("HIPAA").1 The Privacy Rule standards address the use and disclosure of individuals' health information (called "protected health information") by organizations subject to the Privacy Rule (called "covered entities"), as well as standards for individuals' privacy rights to understand and control how their health information is used. Within HHS, the Office for Civil Rights ("OCR") has responsibility for implementing and enforcing the Privacy Rule with respect to voluntary compliance activities and civil money penalties.

    A major goal of the Privacy Rule is to assure that individuals' health information is properly protected while allowing the flow of health information needed to provide and promote high quality health care and to protect the public's health and well-being. The Rule strikes a balance that permits important uses of information, while protecting the privacy of people who seek care and healing. Given that the health care marketplace is diverse, the Rule is designed to be flexible and comprehensive to cover the variety of uses and disclosures that need to be addressed.

Who is Covered by the Privacy Rule

The Privacy Rule, as well as all of the Administrative Simplification rules, applies to health plans, health care clearinghouses, and to any health care provider who transmits health information in electronic form in connection with transactions for which the Secretary of HHS has adopted standards under HIPAA (the "covered entities"). For help in determining whether you are covered, use CMS's decision tool.

Health Plans. Individual and group plans that provide or pay the cost of medical care are covered entities.4 Health plans include health, dental, vision, and prescription drug insurers, health maintenance organizations ("HMOs"), Medicare, Medicaid, Medicare+Choice and Medicare supplement insurers, and long-term care insurers (excluding nursing home fixed-indemnity policies). Health plans also include employer-sponsored group health plans, government and church-sponsored health plans, and multi-employer health plans. There are exceptions—a group health plan with less than 50 participants that is administered solely by the employer that established and maintains the plan is not a covered entity. Two types of government-funded programs are not health plans: (1) those whose principal purpose is not providing or paying the cost of health care, such as the food stamps program; and (2) those programs whose principal activity is directly providing health care, such as a community health center,5 or the making of grants to fund the direct provision of health care. Certain types of insurance entities are also not health plans, including entities providing only workers' compensation, automobile insurance, and property and casualty insurance. If an insurance entity has separable lines of business, one of which is a health plan, the HIPAA regulations apply to the entity with respect to the health plan line of business.

Health Care Providers. Every health care provider, regardless of size, who electronically transmits health information in connection with certain transactions, is a covered entity. These transactions include claims, benefit eligibility inquiries, referral authorization requests, or other transactions for which HHS has established standards under the HIPAA Transactions Rule.6 Using electronic technology, such as email, does not mean a health care provider is a covered entity; the transmission must be in connection with a standard transaction. The Privacy Rule covers a health care provider whether it electronically transmits these transactions directly or uses a billing service or other third party to do so on its behalf. Health care providers include all "providers of services" (e.g., institutional providers such as hospitals) and "providers of medical or health services" (e.g., non-institutional providers such as physicians, dentists and other practitioners) as defined by Medicare, and any other person or organization that furnishes, bills, or is paid for health care.

Health Care Clearinghouses. Health care clearinghouses are entities that process nonstandard information they receive from another entity into a standard (i.e., standard format or data content), or vice versa.7 In most instances, health care clearinghouses will receive individually identifiable health information only when they are providing these processing services to a health plan or health care provider as a business associate. In such instances, only certain provisions of the Privacy Rule are applicable to the health care clearinghouse's uses and disclosures of protected health information.8 Health care clearinghouses include billing services, repricing companies, community health management information systems, and value-added networks and switches if these entities perform clearinghouse functions.
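
Taken together, the three categories above amount to a short decision procedure. The sketch below is a simplification for illustration only, not legal advice and no substitute for CMS's decision tool; the field names are assumptions introduced here.

# Rough sketch of the HIPAA covered-entity test described above. A real
# determination involves many more distinctions (exceptions, hybrid
# entities, etc.); this only mirrors the top-level structure.
from dataclasses import dataclass

@dataclass
class Entity:
    is_health_plan: bool = False
    is_clearinghouse: bool = False
    is_health_care_provider: bool = False
    transmits_standard_transactions_electronically: bool = False

def is_covered_entity(e: Entity) -> bool:
    if e.is_health_plan or e.is_clearinghouse:
        return True
    # Providers are covered only if they electronically transmit health
    # information in connection with a HIPAA standard transaction.
    return (e.is_health_care_provider
            and e.transmits_standard_transactions_electronically)

clinic = Entity(is_health_care_provider=True,
                transmits_standard_transactions_electronically=True)
print(is_covered_entity(clinic))  # True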

Business Associates

Business Associate Defined. In general, a business associate is a person or organization, other than a member of a covered entity's workforce, that performs certain functions or activities on behalf of, or provides certain services to, a covered entity that involve the use or disclosure of individually identifiable health information. Business associate functions or activities on behalf of a covered entity include claims processing, data analysis, utilization review, and billing.9 Business associate services to a covered entity are limited to legal, actuarial, accounting, consulting, data aggregation, management, administrative, accreditation, or financial services. However, persons or organizations are not considered business associates if their functions or services do not involve the use or disclosure of protected health information, and where any access to protected health information by such persons would be incidental, if at all. A covered entity can be the business associate of another covered entity.

Business Associate Contract. When a covered entity uses a contractor or other non-workforce member to perform "business associate" services or activities, the Rule requires that the covered entity include certain protections for the information in a business associate agreement (in certain circumstances governmental entities may use alternative means to achieve the same protections). In the business associate contract, a covered entity must impose specified written safeguards on the individually identifiable health information used or disclosed by its business associates.10 Moreover, a covered entity may not contractually authorize its business associate to make any use or disclosure of protected health information that would violate the Rule. Covered entities that had an existing written contract or agreement with business associates prior to October 15, 2002, which was not renewed or modified prior to April 14, 2003, were permitted to continue to operate under that contract until they renewed the contract or April 14, 2004, whichever was first.11 See additional guidance on Business Associates and sample business associate contract language.