We impact the lives of children in Africa through technology

Date: 31 January 2020

We’re incredibly excited to announce that we’ve joined the Venture Scale programme at Founders Factory Africa - kicking off today. Over the next 6 months, we’ll be teaming up with FFA and Netcare to grow our business and scale across the continent!

We at Envisionit Deep AI are looking forward to working together with Founders Factory Africa and Netcare Group for the next 6 months and beyond, unlocking incredible opportunities and democratising access to healthcare for all.

Recognition by FFA and Netcare, together with acceptance into the Venture Scale programme, further affirms the innovation and relevance of the work we’re doing at Envisionit Deep AI. We’re certain this partnership will allow all parties to benefit from the continued building, refinement and deployment of EDAI’s pattern recognition algorithms and models in South African and wider African hospitals and radiology practices.

Date: 2 January 2020

AI 'outperforms' doctors in diagnosing breast cancer. Read more here

Below, the assessment criteria for radiology research on artificial intelligence are compared against Google's breast cancer screening paper.

Assessment criteria: “Assessing Radiology Research On Artificial Intelligence: A Brief Guide For Authors, Reviewers, And Readers—From Radiology Editorial Board”, Radiology, 2020.
Paper assessed: “International Evaluation Of An AI System For Breast Cancer Screening”, Nature, 2020.

1. Carefully define all three image sets (training, validation and test) of the AI experiment.
   YES: Detailed in the supplementary materials, with STARD diagrams for both the UK and US datasets.
2. Use an external test set for final statistical reporting.
   NO: Test sets were derived from the same sources as the training and validation sets.
3. Use multivendor images, preferably for each phase of the AI evaluation (training, validation and test sets).
   NO: 95% of images came from one vendor (Hologic), with General Electric (4%) and Siemens (1%); no per-vendor performance breakdown.
4. Justify the size of the training, validation and test sets.
   NO: Although the authors state that the test set size was chosen to "increase statistical power", no power calculation is given.
5. Train the AI algorithm using a standard of reference that is widely accepted in the field.
   YES: Ground truth established by histopathology and clinical follow-up.
6. Describe any preparation of images for the AI algorithm.
   NO: No description of DICOM image preparation (even to say there was none).
7. Benchmark the AI performance against radiology experts.
   YES: Breakdown of reader experience given in Extended Data Table 7, although not all US radiologists were fellowship trained and the 33 UK readers are unidentified.
8. Demonstrate how the AI algorithm makes decisions.
   YES: Description and diagrams in the supplementary materials for the algorithm's binary decisions and localisations.
9. The AI algorithm should be publicly available so that the claim of performance can be verified.
   NO: Neither publicly nor commercially available.
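Criteria 1 and 2 above hinge on how the image sets are partitioned: development data must be split into disjoint training, validation and test sets, and final statistics should come from an external test set that shares nothing with development data. A minimal sketch of that distinction, using toy image records (the function name, `vendor` field and record IDs are illustrative, not taken from either paper):

```python
import random

def split_dataset(images, train_frac=0.7, val_frac=0.15, seed=0):
    """Criterion 1: define disjoint training, validation and internal test sets."""
    rng = random.Random(seed)
    shuffled = list(images)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (shuffled[:n_train],                   # training set
            shuffled[n_train:n_train + n_val],    # validation set
            shuffled[n_train + n_val:])           # internal test set

# Toy development data from one source, plus an external set from another.
internal = [{"id": f"int-{i}", "vendor": "Hologic"} for i in range(100)]
external_test = [{"id": f"ext-{i}", "vendor": "GE"} for i in range(20)]

train, val, internal_test = split_dataset(internal)

# The three internal splits are disjoint (criterion 1)...
assert not {x["id"] for x in train} & {x["id"] for x in internal_test}
# ...and the external test set shares nothing with development data (criterion 2).
assert not {x["id"] for x in external_test} & {x["id"] for x in internal}
```

An internal test split held out from the same sources still only satisfies criterion 1; criterion 2 asks that the set used for final reporting come from sites and vendors never seen during development.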

Date: 16 December 2019

Peer review is a viable application for artificial intelligence (AI) software that identifies abnormalities on X-rays, say radiologists from New Delhi who described this application at the 2019 RSNA annual meeting. Read more here

Date: 9 December 2019

AI in medical imaging entered the consciousness of radiologists just a few years ago, notably peaking in 2016 when Geoffrey Hinton declared radiologists’ time was up, swiftly followed by the first AI startups booking exhibition booths at RSNA. Read more here

Date: 18 October 2019

Using artificial intelligence to read chest radiographs for tuberculosis detection: A multi-site evaluation of the diagnostic accuracy of three deep learning systems. Read more here

Date: 13 September 2019

Google says its AI detects 26 skin conditions as accurately as dermatologists. Read more here