<div dir="ltr">I agree that what we seem to have are "malicious user scenarios" and I think it will be good to document those. That, alongside a documented technical "trail" from capture to receiver-side verification should blunt many of the challenges casual parties will perceive. I think it's also a firm starting point for the IBA, who will indeed be testing both technical and human maliciousness in court...an evolving process.<div>
<br></div><div style>Thanks for this, Nathan.</div><div style><br></div></div><div class="gmail_extra"><br clear="all"><div>David M. Oliver | <a href="mailto:david@olivercoady.com" target="_blank">david@olivercoady.com</a> | <a href="http://olivercoady.com" target="_blank">http://olivercoady.com</a> | <a href="http://dmo.tel" target="_blank">http://dmo.tel</a> | @davidmoliver | +1 970 368 2366</div>
<br><br><div class="gmail_quote">On Mon, Sep 30, 2013 at 12:34 PM, Nathan of Guardian <span dir="ltr"><<a href="mailto:nathan@guardianproject.info" target="_blank">nathan@guardianproject.info</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><div class="im">
<div>On 09/24/2013 04:02 PM, David Oliver
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>
<div>With IBA getting ready to openly discuss eyeWitness I was
reminded of a question Eric Johnson (Internews) asked at the
Martus training:</div>
<div>"How can we ACTUALLY know the image is genuine?" about
the validity of an image in InformaCam. <br>
</div>
</div>
</div>
</blockquote></div>
I've cc'd the dev list, since I think this is an important discussion to
have with everyone.<br>
<br>
I think we need to come up with standard language about exactly how
much we are claiming to achieve with InformaCam. This is not unlike
Tor's unending challenge of defining how much anonymity it can
actually provide. Often it comes down to deployment details or
specific malicious "bad actor" user stories, but it is necessary to
have these well documented, so we can point to them from the get-go,
no matter how unlikely or edge-case they might seem to be.<br>
<br>
At a high level, InformaCam does not guarantee "this image is real",
since even an unmanipulated photo can easily contain actors, staged
props, etc. What InformaCam is attempting to do is say "this picture
or video began its documented existence at X point in time, and here
is a body of supporting evidence about its origins". It is up to the
organization receiving it not to blindly trust that the data is
"genuine", but to use the evidence we provide to more quickly and
efficiently prove that something is what it claims to
be.<br>
<br>
Now, the real work is to ensure that this process also works if the data
provided has been hijacked, modified, or generated entirely from
make-believe. So, let's dig into that.<div class="im"><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div>
<div><br>
</div>
<div>Working backwards, from a received image:</div>
<div>1. an image, as a file system entity, can have its hash
computed to see that it matches a separately-received hash</div>
<div>2. J3M extracted from an image was decrypted using a key
consistent with the key with which the J3M was encrypted</div>
<div>3. J3M contents have timestamp similar to (same as?)
captured image</div>
</div>
</div>
</blockquote>
<br></div>
Yes, these are all true and provide the most basic level of
verification. This proves that the media was not manipulated by a
"man in the middle" of any sort, and ensures that, as the evidence
is handled down the line, there is an initial verified state
snapshot to refer back to.<br>
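<br>
Purely as a sketch of step 1, the receiver can recompute the digest itself. The SHA-256 choice and the "file_hash" field name below are illustrative assumptions, not the actual J3M schema:<br>
<pre>
import hashlib

def verify_media_hash(media_path, j3m_dict):
    # recompute the digest of the received file and compare it with the
    # hash recorded in the (already decrypted) J3M metadata
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == j3m_dict.get("file_hash")
</pre>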
<br>
Beyond that, the rest of the J3M data provides real-world data
points that should corroborate what you are seeing in the
media file. If the data says it's 6pm in September in Boston, and the
recorded heading is westerly, do you see the sun starting to set? Do
the cellular towers correspond to a North American carrier (with the
help of a third-party lookup service)? If I visit the place where the
media was captured, do I see some of the same wifi hotspot SSIDs? <br><div class="im">
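<br>
A toy example of the cell-tower check, assuming the decrypted J3M yields a list of tower records with an "mcc" field (the field name is a guess; the MCC values are from the public MCC/MNC tables):<br>
<pre>
# North American mobile country codes: Canada, United States, Mexico
NORTH_AMERICAN_MCCS = {"302", "310", "311", "312", "313", "316", "334"}

def towers_match_region(cell_records, expected_mccs=NORTH_AMERICAN_MCCS):
    # flag submissions whose claimed towers do not belong to the expected region
    return all(str(rec.get("mcc")) in expected_mccs for rec in cell_records)
</pre>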
<br>
<blockquote type="cite">
<div dir="ltr">
<div>
<div><br>
</div>
<div>And, probably not important in the "genuine" discussion
are these steps:</div>
<div>4. image was transferred securely from device to receiver
(HTTPS, Tor optional)</div>
<div><span> </span>- in the case of GDrive,
GlobalLeaks even adds anonymity</div>
<div>5. Image file is stored in encrypted storage on the
device after acquisition and before transmission</div>
<div>6. originally-gathered image is removed from unsecure
("gallery") storage</div>
</div>
</div>
</blockquote></div>
Yes, these are additional assurances that the InformaCam stack is
doing its best to defend all participants and content from
surveillance, manipulation, intrusion, retribution and more.<div class="im"><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div>
<div><br>
</div>
<div>All good. But, this is a pointless exercise in answering
Eric's question as any party can:</div>
<div>(1) use any available tool to capture and possibly doctor
an image before "ingesting" into InformaCam</div>
</div>
</div>
</blockquote></div>
So this is where the "take a boring picture" part comes into play.
When you do this, you submit a number of base frames (along with
your public key) to whoever it is that you are planning to submit
media to. A fairly well-known, though not simple,
back-end process can then compare the base frames
against the sensor signature of submitted media, checking for manipulation
and for similarity of camera hardware sensor artifacts.<br>
<br>
This solves a different, but related, problem than InformaCam does.
There are both research/open-source means of doing this and
commercial products:<br>
<br>
"Large Scale Test of Sensor Fingerprint Camera Identification"<br>
<a href="http://www.ws.binghamton.edu/fridrich/research/EI7254-18.pdf" target="_blank">http://www.ws.binghamton.edu/fridrich/research/EI7254-18.pdf</a><br>
<br>
"
<span style="text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:start;font-style:normal;display:inline!important;font-weight:normal;float:none;line-height:normal;color:rgb(34,34,34);text-transform:none;font-size:20px;white-space:normal;font-family:proxima-nova,Verdana,sans-serif;word-spacing:0px">
<span style="text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:start;font-style:normal;display:inline!important;font-weight:normal;float:none;line-height:normal;color:rgb(34,34,34);text-transform:none;font-size:14px;white-space:normal;font-family:proxima-nova-n4,proxima-nova,Verdana,sans-serif;word-spacing:0px">FourMatch is an
extension for Adobe Photoshop that instantly analyzes any open
JPEG image to determine whether it is an untouched original from
a digital camera.</span>"</span><br>
<a href="http://www.fourandsix.com/fourmatch/" target="_blank">http://www.fourandsix.com/fourmatch/</a><span style="text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:start;font-style:normal;display:inline!important;font-weight:normal;float:none;line-height:normal;color:rgb(34,34,34);text-transform:none;font-size:20px;white-space:normal;font-family:proxima-nova,Verdana,sans-serif;word-spacing:0px">FourMatch: Authenticate images instantly</span><div class="im">
<br>
<br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div>
<div><br>
</div>
<div>(2) acquire InformaCam open source and create a PIRATE
app that creates any</div>
<div>imaginable metadata (reads it from a file, both image and
file on a PC?)</div>
</div>
</div>
</blockquote>
<br>
<blockquote type="cite">
<div dir="ltr">
<div>
<div><br>
</div>
<div>(3) insert that metadata into the image in the valid
InformaCam way, including calculating</div>
<div>the hash to assure tamper-resistance. This image and
metadata can be encrypted using a perfectly valid key
received from an entity.</div>
</div>
</div>
</blockquote>
<br></div>
In the case of #2 and #3 above, this is definitely a unique issue
for InformaCam... the image is unmodified and taken from the
actual camera, but the supporting sensor metadata is forged. The
GPS, compass data, wifi hotspots, and Bluetooth IDs could all be
forged if someone created a malicious app. <br>
<br>
I think we have to dig further to determine exactly what could be
achieved with this sort of malicious use:<br>
<br>
1) ATTACK: An event could be documented at the correct location and
time, but timestamped in the J3M for a different day.<br>
DEFENSE: this is why the Notary feature is important, so that we
correlate when the hash of the file was received vs. when the J3M
says it was created.<br>
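<br>
A minimal sketch of that correlation, assuming the notary simply logs when it first saw each file hash; the field names and the 24-hour threshold are illustrative only:<br>
<pre>
import hashlib
from datetime import datetime, timezone

notary_log = {}  # sha256 hex digest -> UTC time the hash was first received

def notarize(media_bytes):
    digest = hashlib.sha256(media_bytes).hexdigest()
    notary_log.setdefault(digest, datetime.now(timezone.utc))
    return digest

def check_capture_vs_notary(media_bytes, j3m_capture_time, max_gap_hours=24):
    # j3m_capture_time is assumed to be a timezone-aware datetime from the J3M
    received = notary_log.get(hashlib.sha256(media_bytes).hexdigest())
    if received is None:
        return "never notarized"
    gap_hours = (received - j3m_capture_time).total_seconds() / 3600.0
    if gap_hours < 0:
        return "hash notarized before the claimed capture time -- suspicious"
    if gap_hours > max_gap_hours:
        return "large gap between claimed capture and notarization -- review"
    return "ok"
</pre>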
<br>
2) ATTACK: the GPS and compass coordinates are forged to show
something happening somewhere it is not.<br>
DEFENSE: it is essential that we find a way to easily double-check
details such as GPS, compass, cell towers, etc. to make sure they all
verify, both in the data and visually... perhaps we can use
Google Street View where it exists, and otherwise an organization
could have investigators return to the scene of the crime to
verify.<br>
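<br>
One cheap automated check before anyone goes back to the scene: compare the claimed GPS fix against the known locations of the cell towers listed in the J3M. Here `lookup_tower` is a stand-in for a third-party tower-location service, and 35 km is a rough macro-cell radius, not a hard rule:<br>
<pre>
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two points, in kilometers
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def inconsistent_towers(gps_lat, gps_lon, cell_ids, lookup_tower, max_km=35.0):
    # return the cell IDs whose known location is implausibly far from the claimed fix
    flags = []
    for cell_id in cell_ids:
        loc = lookup_tower(cell_id)  # expected to return (lat, lon) or None
        if loc and haversine_km(gps_lat, gps_lon, loc[0], loc[1]) > max_km:
            flags.append(cell_id)
    return flags
</pre>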
<br>
I won't go on here, but I think we should tackle these types of
ATTACKS/DEFENSES soon and write them all down on the wiki, as a
start.<div class="im"><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div>
<div><br>
</div>
<div>Also, it is harder to do but perfectly possible to move this
process to the device itself, reading any imaginable image out of the
unsecure image gallery and then repeating the process.</div>
</div>
</div>
</blockquote></div>
I don't think it is any harder to do from the app itself than from
a desktop. It is the same, and perhaps easier, since our code already
exists as an Android app.<br>
<br>
We could possibly look into keeping our code open source, but
somehow using app DRM or other unique build-time keys to help
ensure that a valid client app is actually being used.<div class="im"><br>
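<br>
The simplest version of a build-time key is just a per-release secret used to tag each submission. Since anyone who unpacks the APK can recover it, this only raises the bar; it does not prove an unmodified client was used. Names below are hypothetical:<br>
<pre>
import hmac, hashlib

BUILD_KEY = b"replace-with-per-release-secret"  # hypothetical, baked in at build time

def tag_submission(media_bytes, j3m_bytes):
    # client side: tag the media + J3M bundle with the build-time secret
    return hmac.new(BUILD_KEY, media_bytes + j3m_bytes, hashlib.sha256).hexdigest()

def verify_submission(media_bytes, j3m_bytes, tag):
    # receiver side: recompute and compare in constant time
    return hmac.compare_digest(tag_submission(media_bytes, j3m_bytes), tag)
</pre>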
<br>
<blockquote type="cite">
<div dir="ltr">
<div>
<div><br>
</div>
<div>Result: un-genuine image said to be genuine.</div>
<div><br>
</div>
<div>Can we use the idea of "technical mischief" vs "human
mischief"? That is</div>
<div>can we get this right for all cases EXCEPT the case that
the human (actual user of the </div>
<div>app) decides to create mischief?</div>
</div>
</div>
</blockquote></div>
Yes, but I think we can go a long way to defend against humans as
well.<div class="im"><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div>
<div><br>
</div>
<div>Is it going to be important for IBA to lay out failure
cases? Or, are we simply going to use the failures we
discover to refine the app?</div>
<div>On this score, is "partial guarantee to be genuine"
valuable, or is this going to devolve into "there is no
partly-genuine option"?</div>
<br>
</div>
</div>
</blockquote></div>
Again, the goal is to provide enough trustworthy supporting evidence
that reporters, investigators, prosecutors, defenders,
etc. have a starting point they can trust. Today, they have nothing
except for the pixels they see, and perhaps a few words in a blog
post, YouTube description, email or tweet. Your question is spot on
in putting the burden on us to ensure that our users can trust the
additional proof points they are given, and that all of the value of
InformaCam is not hijacked by bad actors.<br>
<br>
I hope this is a useful response to your and Eric's
questions, and as I said, we should ensure this is formally
documented and tackled. If we can also design additional features
for a v2 roadmap, or come up with a set of "Other Tools You Should
Use" for v1, that would be fantastic.<span class="HOEnZb"><font color="#888888"><br>
<br>
+n<br>
<br>
<br>
<br>
<br>
<br>
<br>
</font></span></div>
</blockquote></div><br></div>