Our XPRIZE Entry

What technology does your test involve?

If isothermal, tell us more about your isothermal technology
Loop-mediated isothermal amplification (LAMP)

Does your test detect nucleic acid or protein?
Nucleic Acid

If nucleic acid, how many sequences do you target?
2, ORF1a and N

What read-out technology do you use?
Spectrometric (including Colorimetric)

What is your input volume per test?
500 μL or 2000 μL

What sample sources are you planning to use once operational?
Nasal Swab

Which application best defines your test?
Distributed Lab

What is your limit of detection (LoD)?
1-5 copies/μL

What targets is your limit of detection based on?
Our estimated limit of detection is based on both our own experiments (spiking ZeptoMetrix inactivated virions into raw samples and Twist RNA into inactivated samples) and the results reported by Rabe, Cepko, and Anahtar et al. for the same assay chemistry. We have not yet run the full 20 replicates at several levels near the LoD to determine it precisely, as we have instead prioritized optimizing various aspects of the workflow (primarily handling of the silica pellet prior to amplification). Our LoD goal is a real-world raw sample level of 1e4 copies/mL, which corresponds to 1e5 copies/mL per individual for a pool of 10. This is significantly below the suspected infectiousness threshold of 1e6 copies/mL (the “Capable to Infect Others” level). A viral load of 1e5 copies/mL is also the level the CDC appears to be recommending for new “less sensitive” tests. The viral targets we detect are the ORF1a and N genes (using AS1e and N2 primers).
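The pooling arithmetic above can be sketched as a quick check. This is an illustrative calculation only; the constants are the figures quoted in the answer, and the variable names are ours.

```python
# Back-of-envelope check of the pooling LoD arithmetic (illustrative only).

RAW_SAMPLE_LOD = 1e4          # target LoD in the pooled raw sample, copies/mL
POOL_SIZE = 10                # individuals per pool
INFECTIOUS_THRESHOLD = 1e6    # suspected infectiousness threshold, copies/mL

# Pooling dilutes each contributor ~POOL_SIZE-fold, so an assay detecting
# 1e4 copies/mL in the pool catches any individual at 1e5 copies/mL or above.
min_detectable_individual_load = RAW_SAMPLE_LOD * POOL_SIZE

print(min_detectable_individual_load)                         # 100000.0
print(min_detectable_individual_load < INFECTIOUS_THRESHOLD)  # True
```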

What sample types are used for your limit of detection?
Nasal Swab

Are results based off of clinical or contrived samples?

What sample types are used for your results
Nasal Swab

Median positive sample concentration (X LoD)
NA, have not yet run clinical validation pilots

What is your positive percent agreement (ppa)?
NA, have not yet run clinical validation pilots

How many positive samples were used?
NA, have not yet run clinical validation pilots

What is your negative percent agreement (npa)?
NA, have not yet run clinical validation pilots

How many negative samples were used?
NA, have not yet run clinical validation pilots

Are your results qualitative or quantitative?

Have you conducted cross reactivity experiments?

How many tests do you currently run per day?
Approximately 50. We are still in development, not yet running pilots or production. Last week we ran a batch of 24 diluted samples at the 2 mL volume (for 10 pools)—a simulation of a run that would comprise 240 individuals.

How many tests could you run per day with current setup?
200,000, though this is difficult to answer and depends on what is meant by “current setup.” Our organization is still in development but growing quickly. The 200,000 figure is based on our current leased lab space at MBC Biolabs, which includes 2 biosafety cabinets, 4 lab benches, and 1 chemical fume hood. It assumes a pooling level of 100, which our LoD supports while still reaching the infectiousness threshold (~1e6/mL); in practice we will likely run mostly pools of 10 at first rather than pools of 100. It also assumes 1 hr of hands-on time per batch using multichannel pipettes, the first configuration we are optimizing in order to spread this screening capability and enable labs without automation to scale quickly. With a single liquid-handling robot, such as an OpenTrons or Bravo (which we intend to bring online in September), the hands-on time per batch would drop to minutes and the same personnel could run at least 10X the number of samples/pools.
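A rough capacity model makes the 200,000 figure concrete. The batch size and pool level come from the answers above; the number of parallel batches per day is our assumption for illustration.

```python
# Rough daily-capacity model for the non-automated configuration (a sketch,
# not a validated throughput claim). Batch size and pool level are from the
# answers; batches_per_day is an assumed input.

POOLS_PER_BATCH = 45   # pooled samples per batch with multichannel pipettes
POOL_SIZE = 100        # pooling level the stated LoD supports

def daily_capacity(batches_per_day: int) -> int:
    """Individuals screened per day at the given batch throughput."""
    return batches_per_day * POOLS_PER_BATCH * POOL_SIZE

# ~45 batches/day spread across stations and shifts reaches ~200,000 people.
print(daily_capacity(45))  # 202500
```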

How long does it take to go from sample collection to results (minutes)?
2 hrs, though again, this is difficult to answer and can be misleading for our program. We are integrating several components of a screening system to achieve mass scale. Our screening system is designed to find unknown new infections among large populations which will be re-screened frequently. Therefore, a 12 hr turnaround for results is sufficient. Through planning or fast-tracking batches, we could reasonably expect 4 hr sample-to-result times.

What is the hands-on time?
1 hr per batch of 45 pooled samples (pool level of 10-100) if using multichannel pipettors. 5-10 minutes if using liquid handling robots.

How many tests can be run per day with one setup?
10K per 24 hr day, assuming no automation, several shifts, and a total of about 10 staff. This scales to 100K+/day with the addition of automation.

Could the test be adapted to point of care?

Capital expense
Less than $10K to purchase all the equipment for the baseline lab configuration from scratch. However, most labs will likely already have most of the necessary equipment.

Estimated cost per test
<$1, highly dependent on pool level. Current per-pool costs are dominated by the NEB LAMP Master Mix (M1804), which is $2/rxn in small volumes and approximately $0.75/rxn in very large volumes. Consumables cost per pool is currently approximately $5, dropping to <$3 in large volumes. The cost could drop significantly further if the open-source LAMP master mix using the HIV-1 RT is produced and made available to the LAMP community at a cost below NEB’s product.
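The per-person cost as a function of pool level follows directly from the figures quoted above. This is an illustrative sketch; the function and its defaults are ours, using the small-volume and large-volume prices from the answer.

```python
# Illustrative per-person cost by pool level (a sketch, using the quoted
# figures: NEB LAMP MM $0.75–$2.00 per reaction; consumables $3–$5 per pool).

def cost_per_person(pool_size: int,
                    mm_per_rxn: float = 2.00,
                    consumables_per_pool: float = 5.00) -> float:
    """Reagent + consumable cost per individual screened, in dollars."""
    return (mm_per_rxn + consumables_per_pool) / pool_size

print(round(cost_per_person(10), 2))              # 0.7  -> under $1 at small-volume pricing
print(round(cost_per_person(10, 0.75, 3.00), 3))  # 0.375 at large-volume pricing
```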

Estimated cost per run
For the non-automated configuration using multichannel pipettes, a batch size of 45 samples would currently cost approximately $150 in consumables and need 3 hours of labor. At $40/hr labor charge, the batch cost comes to $270. Assuming the standard pool size of 10, that covers 450 people for a primary screen.
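The batch economics above can be sketched as a short calculation. All inputs are the figures stated in the answer; only the variable names are ours.

```python
# Sketch of the stated batch economics for the multichannel-pipette
# configuration (inputs are the figures from the answer above).

POOLS_PER_BATCH = 45
POOL_SIZE = 10         # standard pool size
CONSUMABLES = 150.00   # $ per batch
LABOR_HOURS = 3
LABOR_RATE = 40.00     # $/hr

batch_cost = CONSUMABLES + LABOR_HOURS * LABOR_RATE
people_screened = POOLS_PER_BATCH * POOL_SIZE

print(batch_cost)                               # 270.0
print(people_screened)                          # 450
print(round(batch_cost / people_screened, 2))   # 0.6 ($ per person screened)
```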

Is this test capable of pooling samples?

Do you currently pool samples?

If yes, how many samples do you currently pool?
We typically pool 2-5 individuals for our development runs, but do so in a total volume of 5 mL (simulating 0.5 mL per individual in a pool of 10). One of the key concerns for pooling larger numbers of people (even 10) is whether inhibitors or adulterants present in one sample will cause a failure of the pool. We point to China’s success at large-scale sample pooling at the level of 10 using cheek swabs as evidence for optimism.

What are the current limitations to scale this test?
The key limitation of our current configuration is the lack of automation. However, this mode of development is intentional, as access to liquid-handling robots and the resources to run them would be limiting for many labs. With multichannel pipetting and the baseline configuration we are developing, those labs can scale to screening 10K+ people per day. Another limitation is the use of silica rather than magnetic beads; again, this choice has been intentional due to the availability and cost of magnetic capture beads. This is something we will investigate and consider bringing up in parallel with silica. Practically, we are currently limited in resources, both funding and staffing. We closed our first seed investment last week and expect more funding soon. Recruiting qualified employees is particularly challenging now, but we intend to do further outreach and publicize our efforts very soon.

(BIO SAFETY) Do you use standard PPE & biohazard waste procedures to ensure personnel & biohazard safety?

(BIO SAFETY) Do you have a unique or innovative way to ensure personnel or biohazard safety?

(DATA) How do you collect & process results?

(DATA) Do you store patient results?

If yes, how do you ensure data & result privacy & safety?
Using industry-standard procedures. Personally Identifiable Information is carefully protected by assigning non-PII unique identifiers that are used to track samples and pools.

(DATA) How do you report results?

If other, please specify
Through a custom app, with participants selecting their method of notification. In-app notification is the most secure; however, some participants may choose less secure but more convenient direct notification by text or email. Anonymized, aggregated results will also be reported to organizations and participants per specific agreements.

(DATA) Do you have an innovative way to process data and report results, such as an app?

If yes, tell us about your innovative method
Yes, we have developed a custom app with a partner company, Appivo, which has a low-code app development platform. The alpha version of our app is currently under review by Apple. Through the app, participants collect individual samples and self-pooled samples, greatly streamlining the overall system. The collection process is initiated and supervised by a “sponsor,” typically a member of the pool, who has accepted responsibility for understanding and implementing the proper collection process. This is facilitated by in-app instruction (including video links), which takes a few minutes. The app has been optimized for a smooth user experience and for repeated screening by pre-populating with previous collection information. Pooled collection including minors, with parents/guardians, is also supported.

If yes, is this method compatible with sample pooling?

(DATA) Can you integrate with employers/admin for tracking?

(DATA) can you integrate with public domain trackers (i.e. Apple, Google)?
Yes, our app has an API.

(DATA) Do you have a unique or innovative way to collect samples?

If yes, how do you ensure data & result privacy & safety?
The Appivo platform has built-in industry-standard security. Appivo has developed apps that include health data for NGOs, and we are leveraging the legal and privacy elements of those apps. The platform enables separate secure instances to be spun up, siloing each organization’s data. It also enables customization of the app: the branding, the design, and the actual functionality. In line with our mission to spread mass screening capability, FloodLAMP will license the app to other partner organizations, such as universities, which can customize it to suit their specific needs.

What makes your test unique? What is your biggest innovation?
FloodLAMP’s innovation is combining currently available technology into a highly efficient, integrated infectious disease screening program that can scale—and doing so in a truly open way. New technologies have enormous potential, but it is not clear whether any will be ready in 2020. Both well-funded startups and large diagnostics companies will surely bring online significant additional testing capacity, but most of that will be on closed systems or in closed labs, and will be priced at the highest level the market will bear. Some new options will carry impactful tradeoffs, such as antigen tests with LoDs above the threshold for infectiousness. Incentives have not been properly set to encourage the development of a program that any basic lab can affordably bring up and run at significant scale. FloodLAMP is building upon the foundational work of others to combine a sensitive, super cheap, flexible molecular assay with streamlined sample collection. We are openly sharing not just our protocols but the wraparound processes for a dedicated screening program designed to be accessible to all other labs. At the same time, we are soliciting help on best practices under a structure where that knowledge is shared and disseminated, not just used in a limited, closed offering. In short, we are bringing open source to biotech, helping to create the Linux of infectious disease screening. We are building on the current important open efforts (such as JOGL, gLAMP, and shared protocol websites) and implementing an integrated screening program to address the global COVID-19 crisis.

Opentrons is partnering with XPRIZE to support teams with liquid handling robots during the pilot phase. Please tell us whether your test can benefit from liquid handling automation and how you might use (or are already using) the opentrons liquid handler.
Yes, we can benefit greatly from liquid handling automation. We plan to develop the next configuration of our assay protocol around the OpenTrons robot. There is one at our shared lab facility (MBC Biolabs in San Carlos) that we would like to gain access to in mid-September. We have consulted with the automation expert at Denali Pharmaceuticals who planned to automate the Rabe Cepko assay, which primarily involves the silica washing steps. We have extensive experience in automating assay protocols involving silica microparticles through the FloodLAMP founder’s previous startup, True Materials. Affymetrix acquired True Materials in 2008, and we automated several processes for pilot production of liquid arrays using True Materials’ silica microbarcodes on a Biomek FX, plate washers, and vacuum aspirators. The OpenTrons system is ideal for our automation development because of its low upfront cost and the company’s open-source approach.

Please tell us any reasons the proficiency or clinical tests may not accurately recapitulate how well your test works.
The buffer that the proficiency samples are in may not be compatible with our nucleic acid binding protocol. At a high level, we are not just developing a test (or assay protocol, that’s already been done by Rabe and Cepko and their clinical collaborators, Anahtar et al)—we are developing an integrated screening program. That being said, many parts of the system are plug and play. For example, with a slight modification of our existing protocol (elution from the dried pellet), we can go into qPCR as well. We have done almost all of our development on real human samples, starting with raw saliva and soaked nasal swabs. We inactivate those samples with a chemical reducing agent, TCEP/EDTA per the Rabe Cepko protocol. The next step of the main assay protocol uses a high salt solution (NaI) along with the prepared silica for nucleic acid binding, and that may not work or work as well without the TCEP. For our no template controls, we use 1X PBS with the corresponding amount of the TCEP Inactivation Solution. We have not yet run our assay protocol with VTM or other sample collection buffers, as we will collect and inactivate using our protocol.

none submitted