[guardian-dev] CacheWord Key Derivation

Hans-Christoph Steiner hans at guardianproject.info
Wed Mar 19 14:32:48 EDT 2014

On 03/19/2014 01:20 PM, Stephen Lombardo wrote:
> On Wed, Mar 19, 2014 at 11:44 AM, Hans-Christoph Steiner <
> hans at guardianproject.info> wrote:
>> I don't think setting the iteration count to 100 was a bad choice, but a
>> realistic one based on real-world constraints.
>> While it would be nice to have a super well mapped-out understanding of
>> the iteration count, we need to keep in mind that exploiting strong
>> crypto because there were not enough KDF iterations is still very rare,
>> even at the NSA.
> While I agree that the selection of KDF work factor should be subject to
> specific application requirements, exploits based on weak / nonexistent key
> derivation are far from rare. These sorts of attacks are happening
> constantly on leaked databases, and plenty of automated tools exist to
> accelerate using COTS hardware. For mobile devices, physical access and
> forensic tools can facilitate access to application data files, or security
> issues like those recently discovered in WhatsApp can expose data
> inadvertently. This is already happening frequently and will become even
> more widespread in the future.
>> I think it's fine to set a minimum, say 100, then use some automatic
>> iteration calculator like what Michael already has, and just test it on a
>> recent device to make sure it is setting a decent iteration count while
>> staying within a reasonable delay time for the calculation.
> From a technical perspective, 100 iterations is really just too weak. That
> is an order of magnitude lower than the recommended iteration count when
> the standard was proposed fourteen years ago. Using 100 iterations today is
> almost equivalent to not doing key derivation at all.
> I can certainly understand the practical position on performance, and that
> there were some initial decisions made due to those factors. However, there
> are surely ways to improve that with minimal engineering efforts. For
> instance, if the Java PBKDF2 implementation is slow, perhaps CacheWord
> could hand that responsibility off to OpenSSL to be executed in native code
> (IIRC, CacheWord already has native dependencies). For comparison, even
> 4000 native iterations on cheap 2009 era hardware has perfectly reasonable
> performance.
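
[For illustration only: a minimal sketch of PBKDF2 in plain Java via
javax.crypto, not CacheWord's actual code. It shows that the iteration
count is just a caller-supplied parameter; the 4000 figure below echoes
Stephen's native-hardware comparison, not any CacheWord default.]

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class KdfSketch {
    // Derive a 256-bit key from a passphrase with PBKDF2-HMAC-SHA1.
    // Salt and iteration count are supplied by the caller.
    public static byte[] derive(char[] passphrase, byte[] salt, int iterations)
            throws Exception {
        PBEKeySpec spec = new PBEKeySpec(passphrase, salt, iterations, 256);
        SecretKeyFactory factory =
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        byte[] key = factory.generateSecret(spec).getEncoded();
        spec.clearPassword(); // wipe the spec's internal passphrase copy
        return key;
    }

    public static void main(String[] args) throws Exception {
        byte[] key = derive("correct horse".toCharArray(),
                "0123456789abcdef".getBytes(), 4000);
        System.out.println(key.length); // 32 bytes = 256 bits
    }
}
```

A native OpenSSL implementation would expose the same three inputs
(passphrase, salt, iteration count), so swapping the backend for speed
need not change the calling code.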
> From my perspective, a reasonably high default and, from a usability
> perspective, a fast implementation are more important in the short run
> than adaptive iteration. Adaptive iteration counts would certainly be nice
> to have, but you'd still need to take care to ensure that the algorithm
> doesn't wind up choosing low counts due to a poorly performing PBKDF2
> library. Thus, the priority should be on improving the performance enough
> for a good default first.
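
[Again purely a sketch, not CacheWord code: one way to do adaptive
iteration with the safeguard Stephen describes, i.e. time a probe run to
hit a target delay, but clamp to a hard floor so a slow PBKDF2 library
can never drag the count down to something weak. The floor value is
hypothetical.]

```java
public class IterationCalibrator {
    // Hypothetical floor: never go below this, even on slow hardware or
    // with a poorly performing PBKDF2 library.
    static final int MIN_ITERATIONS = 4000;
    static final long MAX_ITERATIONS = 1_000_000;

    // Estimate how many iterations fit in targetMillis by timing a fixed
    // probe run of the KDF, then clamp the estimate to [MIN, MAX].
    public static int calibrate(Runnable probeRun, int probeIterations,
            long targetMillis) {
        long start = System.nanoTime();
        probeRun.run(); // should perform probeIterations of the real KDF
        long elapsedMs = Math.max(1, (System.nanoTime() - start) / 1_000_000);
        long estimate = probeIterations * targetMillis / elapsedMs;
        return (int) Math.max(MIN_ITERATIONS,
                Math.min(estimate, MAX_ITERATIONS));
    }
}
```

With this shape, a slow implementation only makes unlocking slower; it
cannot silently weaken the derived key below the floor.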
>> Then move on to getting CacheWord really easy to use with solid
>> usability; CacheWord is close to that. That will drive adoption now,
>> while few will care about the perfect KDF iteration calculator.
> Increased adoption and usability are important, but the CacheWord KDF issue
> is really both a technical and a marketing / adoption issue. The Guardian
> Project does a great job of engaging developers to ensure that they use
> available technology to make their applications more secure. Yet, if the
> use of a particular library would actually make an application less secure
> from a significant class of attacks, then a responsibility exists to
> disclose it. That is the case today with CacheWord and SQLCipher, since the
> use of the former could significantly reduce the protections of the latter.
> This likely presents a challenge for long-term adoption, at least among
> security-conscious developers.

I agree with all that you are saying.  What I am saying is that it needs to be
balanced with the realities that we are dealing with:

* the vast majority of apps have no local storage encryption at all
* development budgets and developer time are limited
* there are deadlines for shipping some of the software we are using this in
* by the time the database has been extracted from your phone, you are much
more likely to be vulnerable to compulsion, both legal [1][2][3] and via a
wrench [5], or to the weakness of the user's crappy password

If we aim to solve all the problems before releasing, we'll be waiting an
awfully long time.  The key is to set this up so that we can work more
iteratively.  Let's get a workable release out sooner, with the infrastructure
in place to upgrade the weak points as there is time and money to do so.  It
sounds like Abel already has the plan for working more iteratively.

[1] http://www.wired.com/threatlevel/2012/01/judge-orders-laptop-decryption/
[2] http://www.pogowasright.org/court-orders-sheriff-to-turn-over-password/
[5] https://www.xkcd.com/538/


PGP fingerprint: 5E61 C878 0F86 295C E17D  8677 9F0F E587 374B BE81
