I have done something similar by decomposing images into signatures using the wavelet transform.
My approach was to pick the n most significant coefficients from each transformed channel and record their locations. This was done by sorting the list of (power, location) tuples according to abs(power). Similar images will have significant coefficients in the same places.
I found it was best to transform the image into YUV format first, which effectively allows you to weight similarity in shape (Y channel) and colour (UV channels).
You can find my implementation of the above in mactorii, which unfortunately I haven't been working on as much as I should have :-)
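If you want a feel for what such a signature might look like in code, here is a minimal sketch, not the mactorii implementation: PyWavelets, the 'haar' wavelet, level=3 and n=40 are all placeholder choices of mine. Run it per channel (Y, U, V) and weight the per-channel scores to taste:

import numpy as np
import pywt  # PyWavelets

def signature(channel, n=40):
    # 2-D wavelet decomposition of a single channel
    coeffs = pywt.wavedec2(channel, 'haar', level=3)
    # pack all coefficients into one 2-D array so locations are comparable
    arr, _ = pywt.coeffs_to_array(coeffs)
    # keep the flat indices of the n coefficients with the largest abs(power)
    return set(np.argsort(np.abs(arr), axis=None)[-n:])

def similarity(sig_a, sig_b):
    # overlap of significant-coefficient locations; 1.0 means identical sets
    return len(sig_a & sig_b) / float(len(sig_a | sig_b))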
Another method, which some friends of mine have used with surprisingly good results, is to simply resize your image down to, say, 4x4 pixels and store that as your signature. How similar two images are can then be scored by, say, computing the Manhattan distance between the two signatures, using corresponding pixels. I don't have the details of how they performed the resizing, so you may have to play with the various algorithms available for that task to find one which is suitable.
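As a sketch of that idea (the exact resize filter they used is unknown to me; Pillow's LANCZOS below is just one candidate to experiment with):

import numpy as np
from PIL import Image

def tiny_signature(path, size=(4, 4)):
    # shrink to a tiny thumbnail and flatten the RGB values into a vector
    im = Image.open(path).convert('RGB').resize(size, Image.LANCZOS)
    return np.asarray(im, dtype=float).ravel()

def manhattan(sig_a, sig_b):
    # sum of absolute per-pixel differences; smaller means more similar
    return np.abs(sig_a - sig_b).sum()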
A similar question was asked a year ago and has numerous responses, including one regarding pixelizing the images, which I was going to suggest as at least a pre-qualification step (as it would exclude very dissimilar images quite quickly).
There are also links there to still-earlier questions which have even more references and good answers.
Here's an implementation of some of these ideas using SciPy, with your three images above (saved as im1.jpg, im2.jpg and im3.jpg respectively). The final output shows im1 compared with itself as a baseline, and then each image compared with the others.
>>> import scipy as sp
>>> from scipy.misc import imread   # removed in newer SciPy; imageio.imread is the suggested replacement
>>> from scipy.signal import correlate2d as c2d
>>>
>>> def get(i):
... # get JPG image as Scipy array, RGB (3 layer)
... data = imread('im%s.jpg' % i)
... # convert to grey-scale using W3C luminance calc
... data = sp.inner(data, [299, 587, 114]) / 1000.0
... # normalize per http://en.wikipedia.org/wiki/Cross-correlation
... return (data - data.mean()) / data.std()
...
>>> im1 = get(1)
>>> im2 = get(2)
>>> im3 = get(3)
>>> im1.shape
(105, 401)
>>> im2.shape
(109, 373)
>>> im3.shape
(121, 457)
>>> c11 = c2d(im1, im1, mode='same') # baseline
>>> c12 = c2d(im1, im2, mode='same')
>>> c13 = c2d(im1, im3, mode='same')
>>> c23 = c2d(im2, im3, mode='same')
>>> c11.max(), c12.max(), c13.max(), c23.max()
(42105.00000000259, 39898.103896795357, 16482.883608327804, 15873.465425120798)
So note that im1 compared with itself gives a score of 42105, im2 compared with im1 is not far off that, but im3 compared with either of the others gives well under half that value. You'd have to experiment with other images to see how well this might perform and how you might improve it.
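One quick way to put those peaks on a common scale (my own convention, not anything canonical) is to divide each by the self-match baseline; with the values printed above:

ratio_12 = c12.max() / c11.max()   # ~0.95: im2 is a near match to im1
ratio_13 = c13.max() / c11.max()   # ~0.39: im3 scores well below the baseline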
Run time is long... several minutes on my machine. I would try some pre-filtering to avoid wasting time comparing very dissimilar images, maybe with the "compare jpg file size" trick mentioned in responses to the other question, or with pixelization. The fact that you have images of different sizes complicates things, but you didn't give enough information about the extent of butchering one might expect, so it's hard to give a specific answer that takes that into account.
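For instance, the file-size pre-check could be as cheap as this (the 25% tolerance is a made-up starting point you would need to tune):

import os

def worth_comparing(path_a, path_b, tol=0.25):
    # cheap pre-filter: skip the expensive correlation when the compressed
    # file sizes differ by more than tol of the larger one
    size_a, size_b = os.path.getsize(path_a), os.path.getsize(path_b)
    return abs(size_a - size_b) <= tol * max(size_a, size_b)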
There is a discussion of image similarity algorithms over at Stack Overflow. Since you don't need to detect warped or flipped images, the histogram approach may be sufficient, provided the image crop isn't too severe.
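A minimal sketch of that histogram approach, assuming Pillow and NumPy; the bin count and the L1 distance are my own choices, not prescribed by that discussion:

import numpy as np
from PIL import Image

def rgb_histogram(path, bins=16):
    # per-channel colour histogram, normalised so image size doesn't matter
    im = np.asarray(Image.open(path).convert('RGB'))
    counts = [np.histogram(im[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    hist = np.concatenate(counts).astype(float)
    return hist / hist.sum()

def histogram_distance(hist_a, hist_b):
    # L1 distance between normalised histograms; 0.0 means identical distributions
    return np.abs(hist_a - hist_b).sum()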