[Ssc-dev] InformaCam: a question about "genuine-ness" of an InformaCam image

David Oliver david at olivercoady.com
Mon Sep 30 12:58:31 EDT 2013


I agree that what we seem to have are "malicious user scenarios," and I
think it will be good to document those.  That, alongside a documented
technical "trail" from capture to receiver-side verification, should blunt
many of the challenges casual parties will perceive.  I think it's also a
firm starting point for the IBA, who will indeed be testing both technical
and human maliciousness in court... an evolving process.

Thanks for this, Nathan.


David M. Oliver | david at olivercoady.com | http://olivercoady.com |
http://dmo.tel | @davidmoliver | +1 970 368 2366


On Mon, Sep 30, 2013 at 12:34 PM, Nathan of Guardian <
nathan at guardianproject.info> wrote:

>  On 09/24/2013 04:02 PM, David Oliver wrote:
>
>  With the IBA getting ready to openly discuss eyeWitness, I was reminded
> of a question about the validity of an image in InformaCam that Eric
> Johnson (Internews) asked at the Martus training:
> "How can we ACTUALLY know the image is genuine?"
>
> I've cc'd the dev list, since I think this is an important discussion to
> have with everyone.
>
> I think we need to come up with standard language for exactly how much we
> are claiming to achieve with InformaCam. This is not unlike Tor's unending
> challenge of defining how much anonymity it can actually provide. Often it
> comes down to deployment details or specific malicious "bad actor" user
> stories, but it is necessary to have these well documented, so we can
> point to them from the get-go, no matter how unlikely or edge-case they
> might seem to be.
>
> At a high level InformaCam does not guarantee "This image is real", since
> even in an unmanipulated photo you can easily have actors, staged props,
> etc. What InformaCam is attempting to do is say "this picture or video
> began its documented existence at X point in time, and here is a bunch of
> supporting evidence about its origins". It is up to the organization
> receiving it not to blindly trust that the data is "genuine", but to use
> the evidence we provide to more quickly and efficiently prove that
> something is what it claims to be.
>
> Now, the real work is to ensure that this process also works if the data
> provided has been hijacked, modified or entirely generated from
> make-believe. So, let's dig into that.
>
>
>
>  Working backwards, from a received image:
> 1. an image, as a file system entity, can have its hash computed to check
> that the hash matches a separately-received hash
> 2. the J3M extracted from an image was decrypted using a key consistent
> with the key with which the J3M was encrypted
> 3. the J3M contents have a timestamp similar to (same as?) that of the
> captured image
>
>
> Yes, these are all true and provide the most basic level of verification.
> This proves that the media was not manipulated by a "man in the middle" of
> any sort, and ensures that, as the evidence is handled down the line,
> there is an initial verified state snapshot to refer back to.
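>
> A minimal sketch of those receiver-side checks (field names are made up
> for illustration, not the real J3M schema, and SHA-256 is just an example
> digest):
>
>     import hashlib
>
>     def verify_submission(media_path, expected_hash, j3m):
>         # Step 1: recompute the media hash and compare it to the
>         # separately-received hash.
>         with open(media_path, "rb") as f:
>             actual = hashlib.sha256(f.read()).hexdigest()
>         if actual != expected_hash:
>             return False, "media hash mismatch"
>         # Step 2 is assumed to have already happened: the J3M only
>         # parses at all if it decrypted under the expected key.
>         # Step 3: capture timestamps should agree within a tolerance.
>         delta = abs(j3m["j3m_timestamp"] - j3m["media_timestamp"])
>         if delta > 60:
>             return False, "timestamps differ by %d seconds" % delta
>         return True, "basic checks passed"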
>
> In addition, the other contents of the J3M data provide real-world data
> points that should corroborate what you are seeing in the media file. If
> the data says it's 6pm in September in Boston, and the recorded heading
> is westerly, do you see the sun starting to set? Do the cellular towers
> correspond to a North American carrier (with the help of a third-party
> lookup service)? If I visit the place where the media was captured, do I
> see some of the same wifi hotspot SSIDs?
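>
> The wifi revisit check, for instance, could start as simple set overlap
> over the recorded hotspots (a sketch; the key names are illustrative, not
> the actual J3M schema):
>
>     def wifi_overlap(j3m_hotspots, revisit_scan):
>         # Compare hotspots recorded in the J3M against those observed
>         # when an investigator revisits the location.
>         claimed = {ap["bssid"] for ap in j3m_hotspots}
>         observed = {ap["bssid"] for ap in revisit_scan}
>         common = claimed & observed
>         # Some overlap corroborates the claimed location; zero overlap
>         # in a dense area is a red flag (hotspots do churn over time).
>         return len(common), len(claimed)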
>
>
>  And, probably not important in the "genuine" discussion are these steps:
> 4. image was transferred securely from device to receiver (HTTPS, Tor
> optional)
>  - in the case of GDrive, GlobalLeaks even adds anonymity
> 5. image file is stored in encrypted storage on the device after
> acquisition and before transmission
> 6. originally-gathered image is removed from insecure ("gallery") storage
>
> Yes, these are additional assurances that the InformaCam stack is doing
> its best to defend all participants and content from surveillance,
> manipulation, intrusion, retribution and more.
>
>
>
>  All good.  But, this is a pointless exercise in answering Eric's
> question, as any party can:
> (1) use any available tool to capture and possibly doctor an image before
> "ingesting" it into InformaCam
>
> So this is where the "take a boring picture" part comes into play. When
> you do this, you submit a number of base frames (along with your public
> key) to whoever you are planning to submit media to. A fairly well known,
> though not simple, back-end process can then compare the base frames
> against the sensor signature of the submitted media, to check for
> manipulation and for similarity of camera hardware sensor artifacts.
>
> This is solving a different, but related, problem than InformaCam does.
> There are both research and open-source means of doing this, as well as
> commercial products:
>
> "Large Scale Test of Sensor Fingerprint Camera Identification"
> http://www.ws.binghamton.edu/fridrich/research/EI7254-18.pdf
>
> " FourMatch is an extension for Adobe Photoshop that instantly analyzes
> any open JPEG image to determine whether it is an untouched original from a
> digital camera."
> http://www.fourandsix.com/fourmatch/FourMatch: Authenticate images
> instantly
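>
> For the curious, the core idea in the sensor-fingerprint work is roughly:
> average the noise residuals of the enrolled "boring" frames into a
> reference fingerprint, then correlate the residual of a submitted image
> against it. A toy sketch of that shape (real PRNU pipelines use far
> better denoising and normalized correlation over registered frames):
>
>     import numpy as np
>     from scipy.ndimage import gaussian_filter
>
>     def residual(img):
>         # Noise residual = image minus a denoised version of itself.
>         return img - gaussian_filter(img, sigma=1.0)
>
>     def fingerprint(base_frames):
>         # Average the residuals of the enrolled "boring pictures".
>         return np.mean([residual(f) for f in base_frames], axis=0)
>
>     def same_sensor(fp, img, threshold=0.05):
>         # Correlate the submitted image's residual against the
>         # fingerprint; frames assumed same-size grayscale float arrays.
>         corr = np.corrcoef(fp.ravel(), residual(img).ravel())[0, 1]
>         return corr > threshold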
>
>
>
>
>  (2) acquire the InformaCam open source and create a PIRATE app that
> creates any imaginable metadata (reading both the image and the metadata
> from files on a PC?)
>
>
>
>  (3) insert that metadata into the image in the valid InformaCam way,
> including calculating
> the hash to assure tamper-resistance.  This image and metadata can be
> encrypted using a perfectly valid key received from an entity.
>
>
> In the case of #2 and #3 above, this is definitely a unique issue for
> InformaCam... so the image is unmodified, it is taken from the actual
> camera, but the supporting sensor metadata is forged. The GPS, compass
> data, wifi hotspot, and Bluetooth IDs could all be forged if someone
> created a malicious app.
>
> I think we have to dig further to determine exactly what could be achieved
> with this sort of malicious use:
>
> 1) ATTACK: An event could be documented at the correct location and time,
> but time-stamped in the J3M for a different day.
> DEFENSE: this is why the Notary feature is important, such that we
> correlate when the hash of the file was received vs. when the J3M says it
> was created.
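>
> As a sketch, that cross-check could reduce to something like this
> (epoch-second timestamps; the lag bound is a policy knob, not a real
> InformaCam constant):
>
>     def notary_consistent(j3m_created_at, notary_received_at,
>                           max_lag_secs=7 * 24 * 3600):
>         # Media claiming to be created *after* the notary first saw
>         # its hash is impossible; media notarized long after its
>         # claimed creation deserves extra scrutiny.
>         if j3m_created_at > notary_received_at:
>             return False
>         return (notary_received_at - j3m_created_at) <= max_lag_secs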
>
> 2) ATTACK: the GPS and compass coordinates are forged to show something
> happening somewhere it is not.
> DEFENSE: it is essential that we find a way to easily double-check details
> such as GPS, compass, cell towers, etc. to make sure they all verify,
> both in the data and visually... perhaps we can use Google Street View
> where it exists, and otherwise an organization could have investigators
> who return to the scene of the crime to verify.
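>
> One cheap automated piece of that: confirm the claimed GPS fix is within
> plausible range of a cell tower the device reported seeing. A sketch,
> assuming tower coordinates come from a third-party database (an
> OpenCellID-style lookup) and a rough 35 km GSM range bound:
>
>     import math
>
>     def haversine_km(lat1, lon1, lat2, lon2):
>         # Great-circle distance between two lat/lon points, in km.
>         r = 6371.0  # mean Earth radius
>         p1, p2 = math.radians(lat1), math.radians(lat2)
>         dp = math.radians(lat2 - lat1)
>         dl = math.radians(lon2 - lon1)
>         a = (math.sin(dp / 2) ** 2
>              + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
>         return 2 * r * math.asin(math.sqrt(a))
>
>     def gps_tower_plausible(gps, towers, max_km=35.0):
>         # gps = (lat, lon); towers = [(lat, lon), ...] from lookup.
>         return any(haversine_km(gps[0], gps[1], t[0], t[1]) <= max_km
>                    for t in towers)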
>
> I won't go on here, but I think we should tackle these types of
> ATTACKS/DEFENSES soon and write them all down on the wiki, as a start.
>
>
>
>  Also, it is harder to do but perfectly possible to move this process to
> the device itself, reading any imaginable image out of the insecure image
> gallery and then repeating the process.
>
> I don't think it is any harder to do from the app itself than from a
> desktop. It is the same, and perhaps easier, since our code already
> exists as an Android app.
>
> We could possibly look into keeping our code open source, but somehow
> using app DRM or other unique build-time keys to help ensure that a valid
> client app is actually being used.
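>
> A sketch of what the build-time-key idea might look like on the wire
> (with the obvious caveat that any key shipped inside an app can be
> extracted, so this raises the bar rather than proving client
> authenticity):
>
>     import hashlib
>     import hmac
>
>     BUILD_KEY = b"injected-at-build-time"  # placeholder secret
>
>     def client_tag(payload: bytes) -> str:
>         # The client tags each submission under the per-build secret.
>         return hmac.new(BUILD_KEY, payload, hashlib.sha256).hexdigest()
>
>     def server_check(payload: bytes, received_tag: str) -> bool:
>         # The server recomputes and compares in constant time.
>         return hmac.compare_digest(client_tag(payload), received_tag)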
>
>
>
>  Result: un-genuine image said to be genuine.
>
>  Can we use the idea of "technical mischief" vs. "human mischief"?  That
> is, can we get this right for all cases EXCEPT the case where the human
> (the actual user of the app) decides to create mischief?
>
> Yes, but I think we can go a long way to defend against humans as well.
>
>
>
>  Is it going to be important for the IBA to lay out failure cases?  Or,
> are we simply going to use the failures we discover to refine the app?
> On this score, is "partial guarantee to be genuine" valuable, or is this
> going to devolve into "there is no partly-genuine option"?
>
>   Again, the goal is to provide enough supporting evidence that can be
> trusted such that reporters, investigators, prosecutors, defenders, etc.,
> have a starting point they can trust. Today, they have nothing except for
> the pixels they see, and perhaps a few words in a blog post, YouTube
> description, email or tweet. Your question is spot on in putting the
> burden on us to ensure that our users can trust the additional proof
> points they are given, and that all of the value of InformaCam is not
> hijacked by bad actors.
>
> I hope this is a useful response to your and Eric's questions, and as I
> said, we should ensure this is formally documented and tackled. If we can
> also design additional features for a v2 roadmap, or come up with a set
> of "Other Tools You Should Use" for v1, that would be fantastic.
>
> +n
>