Jump to content

Search the hub

Showing results for tags 'AI'.


Found 179 results
  1. Content Article
    Recent developments in artificial intelligence (AI) have sparked hope that this technology can play a significant role in helping the NHS tackle current pressures, as well as drive longer-term service transformation. But despite a range of important work on AI underway within the NHS, government and a wide array of other organisations, current efforts to harness AI in health care risk being hampered by the lack of an overarching strategy and lack of coordination among the various actors. The government and NHS leaders must develop a dedicated strategy for AI in health care. This report from the Nuffield Trust presents six key priorities that policymakers and health care leaders must address through such a strategy if the benefits of AI are to be realised: meaningful public and staff engagement; effective priority setting; data and digital infrastructure that is fit for purpose; high-quality testing and evaluation; clear and consistent regulation; and the right workforce skills and capabilities.
  2. Content Article
    Diagnostic error is largely discovered and evaluated through self-reporting and manual review, which is costly and not suitable for real-time intervention. AI presents new opportunities to use electronic health record data for automated detection of potential misdiagnosis, executed at scale and generalised across diseases. The authors of this study propose a new, automated approach to identifying diagnostic divergence considering both diagnosis and risk of mortality. The aim of this study was to identify cases of misdiagnosis of infectious disease in the emergency department by measuring the difference between predicted diagnosis and documented diagnosis, weighted by mortality. Two machine learning models were trained for prediction of infectious disease and mortality using the first 24 hours of data. Charts were manually reviewed by clinicians to determine whether there could have been a more correct or timely diagnosis.
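    The divergence measure described in this summary can be illustrated with a short sketch. This is a hypothetical reconstruction, not the study's published method: the function name, inputs and exact weighting are all assumptions, and in the study the probabilities would come from the two trained machine learning models rather than hand-entered numbers.

```python
# Hypothetical sketch of a mortality-weighted diagnostic divergence score.
# In the study, p_dx and p_mort would come from two machine learning models
# trained on the first 24 hours of emergency department data; here they are
# plain numbers for illustration.

def divergence_score(p_dx: float, dx_documented: int, p_mort: float) -> float:
    """Disagreement between predicted and documented diagnosis, weighted by
    predicted mortality risk.

    p_dx          -- model probability of the infectious-disease diagnosis (0-1)
    dx_documented -- 1 if clinicians documented that diagnosis, else 0
    p_mort        -- model-predicted mortality risk (0-1)
    """
    return abs(p_dx - dx_documented) * p_mort

# Strong model/chart disagreement in a high-risk patient scores highest,
# so those charts are prioritised for manual clinician review.
high_priority = divergence_score(0.9, 0, 0.8)   # model disagrees, high mortality risk
low_priority = divergence_score(0.9, 0, 0.05)   # same disagreement, low mortality risk
assert high_priority > low_priority
```

    A chart is flagged for manual review only when the model's disagreement with the documented diagnosis coincides with high predicted mortality risk, which is what lets the approach prioritise clinically dangerous potential misdiagnoses.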
  3. Content Article
    The increasing implementation of artificial intelligence (AI) in healthcare could be either great or terrible news for the safety of services, depending on how organisations develop and implement it. This blog, written by the Professional Record Standards Body in partnership with the user experience company HD Labs, looks at the safety risks associated with using AI in health and care and outlines how standards can help keep AI safe.
  4. Content Article
    Use of artificial intelligence (AI) in healthcare is on the rise. Bodies including UK Governments, the National Institute for Health and Care Research and the NHS AI Lab are all investing in developing and deploying the technology.  The Patient Information Forum (PIF) is an independent UK membership body for people working in health information and support. Developed in collaboration with PIF’s AI working group, this position statement aims to help members understand the AI landscape and how to manage it.
  5. News Article
    A simple blood test using artificial intelligence to predict Parkinson's disease years before symptoms begin has been developed by researchers. They hope it can lead to a cheap, finger-prick test providing early diagnoses - and help find treatments to slow down the disease. Charity Parkinson's UK said it was "a major step forward" in the search for a non-invasive patient-friendly test, but larger trials are needed to prove its accuracy. “At present we are shutting the stable door after the horse has bolted," senior author Prof Kevin Mills, from UCL's Great Ormond Street Institute of Child Health, said. "We need to start experimental treatments before patients develop symptoms." Co-author Dr Jenny Hällqvist, from UCL, said: "People are diagnosed when neurons are already lost. "We need to protect those neurons, not wait till they are gone." Read full story Source: BBC News, 18 June 2024
  6. News Article
    Nurses in the United States continue to voice concerns about artificial intelligence and its integration into electronic health records (EHR), saying the technology is ineffective and interferes with patient care. Nurses from health systems around the country spoke to National Nurses United, their largest labour union, about issues with such programmes as automated nurse handoffs, patient classification systems and sepsis alerts. Multiple nurses cited problems with EHR-based programmes from Epic and Oracle Health that use algorithms to determine patient acuity and nurse staffing levels. "I don't ever trust Epic to be correct," Craig Cedotal, RN, a paediatric oncology nurse at Kaiser Permanente Oakland (Calif.) Medical Center, told the nurses' union. "It's never a reflection of what we need, but more a snapshot of what we've done." He said the technology does not account for the hours of preparing and double-checking the accuracy of chemotherapy treatments before a paediatric patient even arrives at the hospital. Read full story Source: Becker's Health IT, 14 June 2024
  7. Content Article
    Healthcare has become increasingly dependent on, and supported by, technology and digital solutions. We've pulled together some key pieces of hub content to help readers take a closer look at some of the patient safety considerations.
  8. Content Article
    This poster presents preliminary data from a proof-of-concept examining the use of artificial intelligence technology, which can aid medical staff in locating, automatically reporting and effectively classifying safety incidents.
  9. Content Article
    The relentless increase in administrative responsibilities, amplified by electronic health record (EHR) systems, has diverted clinician attention from direct patient care, fuelling burnout. In response, large language models (LLMs) are being adopted to streamline clinical and administrative tasks. Notably, Epic is currently leveraging OpenAI's ChatGPT models, including GPT-4, for electronic messaging via online portals. The volume of patient portal messaging has escalated in the past 5–10 years, and general-purpose LLMs are being deployed to manage this burden. Their use in drafting responses to patient messages is one of the earliest applications of LLMs in EHRs. Previous works have evaluated the quality of LLM responses to biomedical and clinical knowledge questions; however, the ability of LLMs to improve efficiency and reduce cognitive burden has not been established, and the effect of LLMs on clinical decision making is unknown. To begin to bridge this knowledge gap, the authors of this study, published in The Lancet, carried out a proof-of-concept end-user study assessing the effect and safety of LLM-assisted patient messaging.
  10. News Article
    The first NHS AI-run physiotherapy clinic is to be rolled out this year in an effort to cut waiting times amid growing demand and staff shortages. The new platform will provide same-day automated video appointments with a digital physiotherapist via an app that responds to information provided by a patient in real time. It is the first platform of its kind to be approved by the health regulator, the Care Quality Commission, as a registered healthcare provider. Patients seeking physiotherapy for issues such as back pain can be referred to the platform Flok Health through a community or primary care healthcare setting, such as their GP. They can also self-refer directly into the service. The service aims to provide faster care and reduce waiting times and pressure on clinicians, those behind it say. However, some in the industry say that AI cannot yet replicate the skill of a fully trained physiotherapist, and that treatment needs to be nuanced due to the complexity of cases. Chartered Society of Physiotherapy (CSP) health informatics lead Euan McComiskie said of the AI clinic: “There is no doubt that more needs to be done to tackle huge NHS waiting lists, particularly for musculoskeletal services, and AI has huge potential to be an adjunct to the work of physiotherapists. However, AI cannot yet replicate the clinical judgment and skills of a physiotherapist, who is required to be registered with a statutory regulator, the Health and Care Professions Council (HCPC).” McComiskie added that physiotherapists manage “increasing complexity in patient presentation and their treatment needs to be individually tailored”. He said: “It is early days to know how much AI can eventually provide clinical decision making and more research is needed … but not at the cost of patient access, safety, experience nor trust.” Read full story Source: The Guardian, 9 June 2024
  11. News Article
    The United States' largest nurses union is demanding that artificial intelligence tools used in healthcare be proven safe and equitable before deployment. Those that aren’t should be immediately discontinued, the union says. Few algorithms, if any, currently meet their standard. “These arguments that these AI tools will result in improved safety are not grounded in any type of evidence whatsoever,” Michelle Mahon, assistant director of nursing practice at National Nurses United, told Fierce Healthcare. NNU represents 225,000 nurses in the US and has a presence in nearly every state through affiliated organisations, like the California Nurses Association, which protested the use of AI in healthcare in late April. NNU nurses also represent nearly every major hospital and health system in the nation. Most AI nurses interact with is integrated into electronic health records and is often used to predict sepsis or determine patient acuity, union nurses said at an NNU media briefing last month. EHRs cause an estimated 30,000 deaths per year, which is the third leading cause of death in the nation, Mahon said. Adding what they call “unproven” algorithms to EHRs is not how the health system should be spending dollars, NNU says. The union is demanding that all AI used in healthcare meet the precautionary principle, a philosophical approach that requires the highest level of protection for innovations without significant scientific backing. Any AI solution that does not meet this principle, which NNU claims is most of the AI currently on the market and deployed in hospitals, should be immediately discontinued, they say. Read full story Source: Fierce Healthcare, 3 June 2024
  12. Content Article
    In this Forbes article, Robert Pearl MD looks at how AI will affect the legal situation when a patient is harmed in healthcare. He highlights growing confidence and an increasing body of research that points to generative AI being able to outperform medical professionals in various clinical tasks. However, he outlines many questions that still remain about the legal implications of using AI in healthcare. He also argues that liability will become increasingly complex, especially in places where AI is being used without direct individual oversight.
  13. News Article
    A new artificial intelligence (AI) tool developed in the UK can rapidly rule out heart attacks in people attending A&E and help tens of thousands avoid unnecessary hospital stays each year, according to its creators. Known as Rapid-RO, the AI tool has been found to successfully rule out heart attacks in over a third of patients across four UK hospitals during trials. Professor James Leiper, associate medical director at the British Heart Foundation (BHF), which funded the study, said: “This research demonstrates the important role AI could play in guiding treatment decisions for heart patients. “By quickly identifying patients who are safe to be discharged, this technology could help people avoid unnecessary hospital stays, allowing valuable NHS time and resource to be redirected to where it could have the greatest benefit.” Read full story Source: The Independent, 3 June 2024
  14. Content Article
    This seminal study by Cabral et al delves into the transformative potential of artificial intelligence (AI) in oncology, highlighting its pivotal role in enhancing healthcare quality and safety. The study aligns with the broader discourse on AI’s capacity to revolutionise healthcare outcomes, drawing from insights previously proposed on the synergy between human expertise and AI across various medical disciplines.
  15. Content Article
    In January 2024, the Institute for Healthcare Improvement (IHI) Lucian Leape Institute convened an expert panel to explore the promise and potential risks for patient safety from generative artificial intelligence (genAI). This report is based on the expert panel’s review and discussion.
  16. News Article
    An artificial intelligence (AI) system that sends text messages to alert hospital physicians about the high risk for mortality in their patients reduces the number of deaths, according to a study published in Nature Medicine. Chin-Sheng Lin, PhD, associate professor of cardiology at the Tri-Service General Hospital of the National Defense Medical Center in Taipei, Taiwan, and his colleagues have developed an AI system that identifies patients with a high risk for mortality on the basis of a 12-lead ECG. The system is intended to identify patients who would benefit from intensified care. "It is widely acknowledged that providing intensive care to critically ill patients reduces mortality. Delays in providing intensive care for critically ill patients result in catastrophic outcomes. Most in-hospital cardiac arrests are potentially preventable; however, the early signs of deterioration might be difficult to identify," wrote the researchers. The authors emphasised that exactly how the AI warning messages lead to a decrease in overall mortality must still be clarified. But the results suggest that they help in detecting high-risk patients, triggering timely clinical care, and reducing mortality, they wrote. Read full story Source: Medscape, 21 May 2024
  17. Content Article
    This study in Surgery aimed to investigate the accuracy of ChatGPT-4’s surgical decision-making compared with general surgery residents and attending surgeons. Five clinical scenarios were created from actual patient data based on common general surgery diagnoses. Scripts were developed to sequentially provide clinical information and ask decision-making questions. Responses to the prompts were scored based on a standardised rubric for a total of 50 points. Each clinical scenario was run through ChatGPT-4 and sent electronically to all general surgery residents and attendings at a single institution. Scores were compared using Wilcoxon rank sum tests. The results showed that, when faced with surgical patient scenarios, ChatGPT-4 outperformed junior residents and performed on a par with senior residents and attendings. The authors argue that large language models, such as ChatGPT, may have the potential to be an educational resource for junior residents to develop surgical decision-making skills.
  18. Content Article
    Large language models (LLMs) are a form of artificial intelligence that can generate human-like text, functioning as a form of input–output machine. They bring great potential to help the healthcare industry centre care around patients’ needs by improving communication, access and engagement. However, LLMs also present significant challenges associated with privacy and bias that must be considered. This blog looks at three major patient-care advantages of LLMs, as well as the potential risks associated with using them in healthcare.
  19. Content Article
    In this blog, Miqdad Asaria, Assistant Professor at the Department of Health Policy at LSE, argues that AI could lead to a paradigm shift in healthcare systems like the NHS. He outlines how AI could help personalise medical treatments, enhance research and development of new drugs and help with the administrative burden currently undermining the productivity and efficiency of healthcare providers.
  20. Content Article
    In an increasingly global healthcare environment, with patients and professionals from many different cultural and linguistic backgrounds, precision in medical document translation is key. Medical documents range from patient records, patient information leaflets, consent forms, prescriptions and treatment plans to research papers. The translator must have a thorough understanding of the source text and subject matter in order to produce a high-quality target document and ensure patients receive accurate information. But if not done properly, translation can put patients at risk. In this blog, Melanie Cole, Translations Coordinator at EIDO Systems International, talks about the challenges, risks and opportunities for using AI in healthcare translation.
  21. Content Article
    This policy paper provides an update on the Medicines and Healthcare products Regulatory Agency's (MHRA’s) use of AI as a regulator of AI products, as a public service organisation delivering time-critical decisions and as an organisation making evidence-based decisions that impact public safety.
  22. Content Article
    In this article, Radar Healthcare provides a summary of the main sessions, messages and themes emerging from the Care Show London and the Digital Healthcare Show 2024, which both took place in April 2024. It discusses these topics:
    • Embracing technology in care provision
    • Mastering CQC-ready feedback processes
    • The importance of integration between social care and the NHS
    • Leveraging social media
    • AI: the challenges and opportunities
    • Avoiding digital fatigue
    • Fostering patient safety
    In the final section, the article highlights a presentation given by Patient Safety Learning's Chief Executive Helen Hughes and Chief Digital Officer Clive Flashman about the organisation's patient safety standards. They spoke about the standards and accompanying online patient safety assessment toolkit, an easy-to-use resource designed to help organisations establish clearly defined patient safety aims and goals, support their delivery and demonstrate achievement. The article also highlights the contribution of the hub to improving patient safety, saying, "Patient Safety Learning's platform is recognised for its excellence in sharing knowledge on patient safety. It provides a comprehensive suite of tools, resources, case studies, and best practices to support those striving to improve patient care."
  23. Content Article
    The debate about fairness of artificial intelligence (AI) in health care is gaining momentum. At present, the focus of the debate is on identifying unfair outcomes resulting from biased algorithmic decision making. This article in The Lancet Digital Health looks at the ethical principles guiding outcome fairness in AI algorithms.
  24. Event
    The theme of this year's Private Healthcare Information Network (PHIN) forum underscores the pivotal role that advancements in AI, robotics, and data play in shaping the future of healthcare for the benefit of patients and everyone involved in the sector. The event is free to attend, but you need to book to guarantee your place. Register for the event
  25. Content Article
    Surgical Site Infections (SSIs) can have subtle, early signs that are not readily identifiable. This study aimed to develop a machine learning algorithm that could identify early SSIs based on thermal images. Images were taken of surgical incisions on 193 patients who underwent a variety of surgical procedures, but only five of these patients developed SSIs, which limited testing of the models developed. However, the authors were able to generate two models to successfully segment wounds. This proof-of-concept demonstrates that computer vision has the potential to support future surgical applications.