posted on 30 Sep 2015 in security
Timing attacks (using time variations to get information about the key) are some of the most concerning cryptanalytic devices in use because they’re impossible to reason about from a high-level language: you can’t tell what the compiler is going to do. As a result, tools like OpenSSL are built on enormously complex Perl-generated assembly code, which can be audited for timing behavior directly.
It works and it’s fast, but it’s really hard to understand, hard to review, and, most importantly, hard to trust, especially when problems show up every so often.
What if you wanted to be able to ignore these attacks on some function F? If F’s runtime doesn’t vary much to begin with, padding it out to a constant is basically free:
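Concretely, such a wrapper might look like the following sketch (the names and the fixed-deadline assumption are illustrative; it assumes F always finishes within the deadline):

```python
import time

def timing_regulator(f, deadline):
    """Wrap f so every call takes (at least) `deadline` seconds.

    Assumes f always finishes within `deadline`; padding to the
    deadline removes the correlation between the input and the
    observed runtime.
    """
    def regulated(*args, **kwargs):
        start = time.monotonic()
        result = f(*args, **kwargs)
        remaining = deadline - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)  # pad out to the fixed deadline
        return result
    return regulated
```

In practice sleep granularity and scheduler noise still leak a little, but the gross input/runtime correlation is gone.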
Simple enough: we’ve removed the correlation between the data and the time taken at almost every step, so an observer can no longer reconstruct anything from the timing information.
This sort of construction wraps around an existing implementation to provide additional security guarantees. This is what I’ve been calling a regulator, a very common idea in higher-level programming (e.g. type operators, which wrap one type to produce another).
As another example, if we were concerned about containing memory accesses, we could use ordinary process isolation:
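As a sketch of what that might look like, using Python’s multiprocessing (`parse_request` is a hypothetical stand-in for the untrusted code being isolated):

```python
from multiprocessing import Process, Queue

def parse_request(data):
    # stand-in for the untrusted parsing code we want to isolate
    return data.decode("utf-8", errors="replace")

def _worker(q, data):
    q.put(parse_request(data))

def isolated_parse(data, timeout=5.0):
    """Run the parser in its own process: even if it is tricked into
    reading out of bounds, it can only see its own address space."""
    q = Queue()
    p = Process(target=_worker, args=(q, data))
    p.start()
    result = q.get(timeout=timeout)
    p.join()
    return result
```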
This would stop an attack like Heartbleed immediately: the whole point of that attack is reading other, random places in memory, and here those places would belong to a different process.
We can, of course, build one of these regulators for any externally-visible algorithmic property of untrusted, optimized assembly code, and they compose; we can simply feed one into the other. We could do things like:
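Since each regulator is just a function wrapper, composing them is ordinary nesting. A sketch with two illustrative regulators (the names and behaviors here are my own, not from any library):

```python
import time

def with_fixed_time(f, deadline):
    # timing regulator: pad every call out to `deadline` seconds
    def g(*args):
        start = time.monotonic()
        result = f(*args)
        time.sleep(max(0.0, deadline - (time.monotonic() - start)))
        return result
    return g

def with_audit_log(f):
    # auditing regulator: record the arguments of every call
    calls = []
    def g(*args):
        calls.append(args)
        return f(*args)
    g.calls = calls
    return g

# regulators compose by feeding one into the other
hardened = with_fixed_time(with_audit_log(lambda x: x + 1), 0.01)
```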
Because the regulators can be implemented once and put in a library, the approach can be applied very generally. It works because it insulates us from the intricacies of x86, while still letting us leverage the heavy lifting done by other developers.
posted on 29 Sep 2015 in theory
The idea of Monte Carlo is really simple: generate a lot of random inputs for your algorithm, run it on them, and collect the results to get numbers that describe its performance, like average runtime. For most people, this means somewhere they’ll insert a line like:
```python
for x in genRandoms(500):
    ...  # run the algorithm on x and record its statistics
```
But if the inside of that loop is slow, we don’t want to overestimate the number of times we need to run it. How do we know that we need exactly 500 random numbers? Which random numbers should we be choosing?
Should we be choosing random numbers at all?
It’s not a crazy question. Ordinary sampling takes a long time to converge to an approximate empirical distribution: if we think of histogram bins as Poisson processes, a bin’s relative spread falls like \( 1/\sqrt{n} \), so we need on the order of \( 10^4 \) samples per bin to get a 1% standard deviation.
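A quick empirical check of that scaling (the bin count and trial count here are arbitrary):

```python
import random

def bin_relative_spread(n_samples, n_trials=200, bins=10):
    """Empirical relative std-dev of one histogram bin's count under
    uniform sampling; Poisson statistics predict it shrinks roughly
    like 1/sqrt(expected count)."""
    counts = []
    for _ in range(n_trials):
        c = sum(1 for _ in range(n_samples) if random.random() < 1.0 / bins)
        counts.append(c)
    mean = sum(counts) / n_trials
    var = sum((c - mean) ** 2 for c in counts) / n_trials
    return (var ** 0.5) / mean
```

With 100 samples a bin holds ~10 counts and fluctuates around 30%; with 10,000 samples it holds ~1,000 and fluctuates around 3%.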
So for uniform distributions, maybe we can just use `range` a lot of the time. Laying down tracks like that is called quasi-Monte-Carlo, and is – so I’m told – used a lot in finance.
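To see the difference, here’s a sketch comparing plain Monte Carlo against evenly spaced midpoints for estimating a uniform expectation (the integrand is arbitrary):

```python
import random

def f(x):
    return x * x  # E[f] over uniform [0, 1) is exactly 1/3

def mc_estimate(n):
    # ordinary Monte Carlo: n random draws
    return sum(f(random.random()) for _ in range(n)) / n

def qmc_estimate(n):
    # evenly spaced midpoints: the "just use range" trick
    return sum(f((i + 0.5) / n) for i in range(n)) / n
```

At n = 1000 the random estimate is typically off by ~1%, while the evenly spaced one is off by less than a millionth.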
But let’s see if we can’t come up with a proper theory, without the heuristic of “yeah, just spread them out evenly”, for a general distribution. The problem is: which numbers do you give the function we want to randomize, so that the result of the composite procedure looks like that of the fully randomized one?
Well, really, what having more samples gets us is more information at the output. The error \( \phi \) is exactly how different the actual distribution is from what we would predict:
\( \phi(x) \equiv p(x) - \mathrm{interpolate}(x) \)
The logical nuance is that the interpolation is arbitrary: if our function is roughly linear at some input resolution \( \Delta \), we should be linearly interpolating at resolution \( \Delta \). The same goes if we know that the distribution looks like a mixture of Gaussians: we can use a Gaussian kernel for the same purpose.
\( \phi \) is really just our error from the distribution itself, up to a choice of interpolation function.
Once we have a sample list with sufficiently low \( \phi \), we can use it in place of our distribution, and it will compose the same way ordinary distributions compose: we can add them by concatenating the lists, multiply them by taking the cross-product, and so on.
Note that I haven’t even touched sample weighting: in reality, we would weight each sample by its probability; otherwise, the probability mass per sample shrinks as we add samples.
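One way to make the weighted-sample-list idea concrete (this representation and these function names are my own sketch, not a standard API):

```python
from itertools import product

# A distribution as a weighted sample list: [(value, probability), ...]

def mixture(d1, d2, w1=0.5):
    # "adding" distributions: concatenate the lists, rescaling weights
    return [(x, p * w1) for x, p in d1] + [(x, p * (1 - w1)) for x, p in d2]

def convolve(d1, d2):
    # the sum of two independent variables: cross-product of the lists,
    # adding values and multiplying weights
    return [(x + y, p * q) for (x, p), (y, q) in product(d1, d2)]

die = [(i, 1 / 6) for i in range(1, 7)]
two_dice = convolve(die, die)  # distribution of the sum of two dice
```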
Doing things this way, especially in the case of easily-analyzable distributions, can multiply the performance of a normal Monte-Carlo procedure many times, keeping the easy parallelizability, without introducing the complexity of something like the Method of Characteristics.
A really interesting recent idea with a similar motivation is Machine Teaching, which does a similar optimization for ML training data, as opposed to the input to some function to be Monte-Carloed.
posted on 28 Sep 2015 in data-structures
Approximate data structures maintain approximations about incoming streams of data: membership (“could we have seen this before?”, Bloom filters), counts (“about how many did we see?”, Count-Min sketches), and cardinality (“about how many unique elements have we seen?”, HyperLogLog).
The problem is that they’re all parameterized by the sizes of their internal bit arrays, hashes, and whatever else; it’s good that we can tune these parameters, but without a deep understanding of how the structures work, it’s hard to tell which values to use. From an interface perspective, it’d probably be better to insulate our users from the algorithmic details.
So what do we do?
The first obvious thing to do is to look at some block of entries, then make a decision based on the characteristics of those entries: if we assume that we get a roughly constant number of unique items per batch, \( u \), and we know our total number of items has some upper bound, \( N \), then we should be able to choose a size appropriate for about \( u N / b \) items, where \( b \) is the batch size.
But this doesn’t work without a known upper bound (\( N \)), which is yet another parameter our user would have to specify; what we’d like instead is to assert that one exists and behave accordingly. We’d like to batch our input with size \( b \), then join the resulting data structures in some way.
Here’s the leap.
Most of these data structures are linear:
In the case of “About how many did we see?” (CountMinSketch), we can just sum the counts from multiple instances of the data structure.
In the case of “Could we have seen before?” (BloomFilter), we can just or the outputs of multiple instances of the data structure.
In the case of “About how many unique elements have we seen?” (HyperLogLog), we can have each data structure keep a sufficiently large Bloom filter and exclude elements seen in earlier batches from the input to later ones.
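The Bloom filter case is the easiest to show concretely. A minimal sketch (parameters arbitrary, nowhere near production-sized):

```python
import hashlib

class TinyBloom:
    """Minimal Bloom filter, just enough to show that merging two
    filters with the same parameters is a bitwise OR."""

    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # k deterministic hash positions for the item
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

    def union(self, other):
        # linearity: or-ing the bit arrays merges the batches
        merged = TinyBloom(self.m, self.k)
        merged.bits = self.bits | other.bits
        return merged
```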
As an immediate consequence of being able to combine data structures, we should be able to change the size parameter as we go. We can control it based on what the current data looks like: if we’re seeing a lot of new elements, we can increase the size parameter substantially.
The simple case of this kind of control is the dynamic dictionary, as used in Python, but this can be done in much more sophisticated ways: proportional control on the log of the element count is the easiest adjustment. (Note that most of our savings here come from the fact that the storage is much smaller than the number of operations, which is why operating in log-space can be more efficient.)
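A sketch of what proportional control in log-space might look like (the gain, names, and never-shrink policy are all illustrative choices of mine):

```python
import math

def next_capacity(current_capacity, observed_unique, gain=1.0):
    """Proportional control in log-space: adjust log(capacity) by
    `gain` times the log-error between what we sized for and what
    we're actually seeing. Never shrinks, since the structures above
    can only grow by merging."""
    error = math.log(max(observed_unique, 1)) - math.log(current_capacity)
    proposed = current_capacity * math.exp(gain * error)
    return max(current_capacity, int(round(proposed)))
```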
The result of using this kind of scalably self-adjusting data structure is a manageable infrastructure: one that maybe not our users, but definitely our ops people, can actually use.