Compute over big data


In what time frame, if ever, do you think it will be possible to compute efficiently over large datasets (terabytes), e.g. medical data or ML training?
What is the biggest obstacle for the Enigma network to handle big data?


Intel SGX has a memory limit of 4 GB, so in theory a secret contract cannot handle more than 4 GB of input.
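One common workaround for a hard memory ceiling (a sketch only, not Enigma's actual API; the 4 GB figure, chunk size, and function names here are illustrative placeholders) is to stream a large dataset through the computation in pieces that each fit under the limit, keeping only small partial results in memory:

```python
# Sketch: process a dataset larger than an enclave's memory limit by
# streaming it in fixed-size chunks and folding partial results together.
# The ceiling, chunk size, and aggregate() logic are hypothetical.

MEMORY_LIMIT_BYTES = 4 * 1024**3          # assumed per-enclave ceiling
CHUNK_BYTES = MEMORY_LIMIT_BYTES // 4     # leave headroom for working state

def chunked(data: bytes, chunk_size: int):
    """Yield successive chunk_size slices of data."""
    for start in range(0, len(data), chunk_size):
        yield data[start:start + chunk_size]

def compute_over_big_data(data: bytes) -> int:
    # Each chunk fits in memory; only small per-chunk partials accumulate.
    # Here the "computation" is just a byte sum, standing in for real work.
    return sum(sum(chunk) for chunk in chunked(data, CHUNK_BYTES))
```

This only helps for computations that decompose into per-chunk passes (sums, counts, many statistics); algorithms needing random access to the whole dataset at once would still hit the limit.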


It should be possible to do a lot of medical research and create a lot of value on the system despite that limit. After all, most of our current medical knowledge was accumulated without access to big data, using sample sizes that are tiny by modern standards. Even if we could do nothing more advanced than basic genome-wide association studies on encrypted human genomes, that would be an enormous breakthrough. It is still a limitation that would be great to get past, though, so that we can process image data.
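For context on how modest the per-variant computation in a basic GWAS is: the standard allele-based test is a chi-square statistic on a 2x2 table of allele counts for cases versus controls. A minimal plaintext sketch (the counts below are made up; a secret-contract version would run the same arithmetic over encrypted inputs):

```python
# Minimal allele-based association test for a single variant:
# chi-square statistic on a 2x2 table of allele counts
# (cases vs. controls, risk allele vs. other allele).

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Toy counts: the risk allele appears more often among cases.
stat = chi_square_2x2(60, 40, 40, 60)
# stat above 3.84 corresponds to p < 0.05 at 1 degree of freedom
print(round(stat, 2))  # prints 8.0
```

Each variant needs only a handful of counters and a few arithmetic operations, which is why this kind of study is plausible even under tight enclave memory constraints.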