Secure Multi-User Linearly-Separable Distributed Computing
Abstract
The recently introduced multi-user linearly-separable distributed computing framework has revealed how treating users in parallel can yield large parallelization gains at relatively low computation and communication costs. These gains stem from a new approach that converts the computing problem into a sparse matrix factorization problem: the matrix $\mathbf{F}$ describing the users' requests is decomposed as $\mathbf{F} = \mathbf{D}\mathbf{E}$, where a $\gamma$-sparse $\mathbf{E}$ defines the task allocation across servers, and a $\delta$-sparse $\mathbf{D}$ defines both the connectivity between the $N$ servers and $K$ users and the decoding process. While this approach provides near-optimal performance, its linear nature raises data-secrecy concerns. We here adopt an information-theoretic secrecy framework, seeking guarantees that each user can learn nothing more than its own requested function. In this context, our main result provides two necessary and sufficient secrecy criteria: (i) for each user $k$ who observes server responses, the common randomness visible to that user must span a subspace of dimension exactly , and (ii) for each user, removing from $\mathbf{D}$ the columns corresponding to the servers it observes must leave a matrix of rank at least $K-1$. With these conditions in place, we design a general scheme, applicable to finite and non-finite fields alike, that appends to $\mathbf{E}$ a basis of $\mathrm{Null}(\mathbf{D})$ and carefully injects shared randomness; in many cases this entails no additional cost. While maintaining performance, the scheme guarantees perfect information-theoretic secrecy over finite fields, while over the reals the conditions yield an explicit mutual-information bound that can be made arbitrarily small by increasing the variance of the Gaussian common randomness.
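The core algebraic idea behind the scheme can be sketched numerically. The toy example below (a real-valued numpy sketch; the dimensions, sparsity, and randomness scale are illustrative assumptions, not the paper's construction) shows why masking $\mathbf{E}$ with random combinations of $\mathrm{Null}(\mathbf{D})$ directions leaves the recovered functions $\mathbf{F} = \mathbf{D}\mathbf{E}$ unchanged, since $\mathbf{D}$ annihilates its own null space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only): K users, N servers, with N > K
# so that Null(D) is guaranteed to be nontrivial.
K, N = 3, 5

# D: K x N decoding/connectivity matrix; E: N x L task-allocation matrix.
D = rng.integers(-2, 3, size=(K, N)).astype(float)
E = rng.integers(-2, 3, size=(N, 4)).astype(float)
F = D @ E  # the users' requested function outputs

# Extract a basis Z of Null(D) via the SVD: the last N - rank(D) rows
# of Vt span the null space, i.e. D @ Z ~ 0.
_, s, Vt = np.linalg.svd(D)
null_dim = N - int(np.sum(s > 1e-10))
Z = Vt[-null_dim:].T  # N x null_dim, columns form a null-space basis

# Mask the task allocation with random null-space combinations
# (large variance makes the masking "noisier" without affecting F).
R = rng.normal(scale=10.0, size=(Z.shape[1], E.shape[1]))
E_masked = E + Z @ R

# The servers' per-task coefficients changed, yet every user still
# decodes exactly the same requested functions.
assert not np.allclose(E_masked, E)
assert np.allclose(D @ E_masked, F)
```

Over a finite field the same cancellation holds exactly and yields perfect secrecy; over the reals, as the abstract notes, the residual leakage is instead bounded and shrinks as the variance of the injected Gaussian randomness grows.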
Source: arXiv:2602.02489v1 - http://arxiv.org/abs/2602.02489v1 (PDF: https://arxiv.org/pdf/2602.02489v1)