Arnold's cat map is the transformation defined by the matrix \(A_{cat} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ \end{bmatrix}\), with the result taken modulo \(1\). The mapping was first explored in the 1960s by the Russian mathematician Vladimir Arnold, who studied the transformation's effects on a picture of a cat. Arnold's cat map is a mapping from the two-dimensional torus to itself, but Arnold demonstrated that this toroidal automorphism can be applied to a two-dimensional picture by treating the image as if it were wrapped around a torus.

Given a square image, each pixel can be treated as a position vector, and the modulus is taken to be the length of any side of the image. This is equivalent to considering the image as a square of area \(1\) whose pixels are located at positions \((0,0) \leq (x,y) < (1,1)\). Arnold's cat map takes each pixel and moves it to a different location, that is, a different position vector, under the transformation:
\(\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ \end{bmatrix} \begin{bmatrix} x_{k} \\ y_{k} \\ \end{bmatrix} \space mod \space N\), where \(N\) is the length of any side of the image
The determinant of \(A_{cat}\) is \(\begin{vmatrix} 1 & 1 \\ 1 & 2 \\ \end{vmatrix} = (1)(2) - (1)(1) = 2 - 1 = 1\). A determinant of \(1\) means that the transformation induced by \(A_{cat}\) is area-preserving. So given an \(N\space x\space N\) image, the transformation produces another \(N\space x\space N\) image that is a rearrangement of the individual pixels of the original image.
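As a concrete illustration, here is a minimal Python/NumPy sketch of one iteration of the map applied to a square array (the function name `cat_map_step` and the use of array indices as pixel coordinates are choices made for this example, not anything fixed by the map itself). The final check confirms that the transformed image is simply a permutation of the original pixels, as the determinant of \(1\) suggests.

```python
import numpy as np

def cat_map_step(img):
    """Apply one iteration of Arnold's cat map to a square N x N image array.

    Each pixel at position (x, y) is moved to (x + y, x + 2y) mod N, which is
    A_cat = [[1, 1], [1, 2]] acting on the column vector (x, y).
    """
    assert img.shape[0] == img.shape[1], "image must be square"
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = np.empty_like(img)
    out[(x + y) % n, (x + 2 * y) % n] = img[x, y]
    return out

# A stand-in "image": 100 x 100 distinct pixel values.
img = np.arange(100 * 100).reshape(100, 100)
once = cat_map_step(img)

# Area preservation: the new image contains exactly the same pixels, rearranged.
assert np.array_equal(np.sort(once.ravel()), np.sort(img.ravel()))
```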
Arnold's cat map has the interesting property of displaying simultaneous order and chaos. Under iterated transformations, an image is distorted into what looks like random static, yet with enough successive applications of the mapping the original image is eventually reproduced. Throughout the iterations, "ghost" images of the original sometimes appear, often inverted or as multiple tiled copies of the original picture. In the example below, a square \(100\space x\space 100\) pixel image is processed through Arnold's cat map until the original image is recovered. The number below each image is the number of iterations of the mapping needed to generate that image.

 
 
[Image gallery: iterations 01 through 60 of the \(100\space x\space 100\) pixel cat image; iteration 30 is marked as the halfway point of the cycle, and iteration 60 reproduces the original image.]

The following are a few select iterations of a larger version (\(162\space x\space 162\) pixels) of the same cat image, shown to give a more detailed view of the potential patterns and ghost images. While the same image had an iterative period of \(60\) when sized at \(100\space x\space 100\) pixels, the \(162\space x\space 162\) pixel version has an iterative period of \(216\).
[Image gallery: iterations 01, 72, 108, and 136 of the \(162\space x\space 162\) pixel version; iteration 108 is the halfway point of its 216-iteration cycle.]

The following are a few select iterations of a still larger version (\(220\space x\space 220\) pixels) of the cat image in order to show a few patterns and ghost images. This \(220\space x\space 220\) pixel version has an iterative period of \(24\).
[Image gallery: iterations 01, 07, 12, and 16 of the \(220\space x\space 220\) pixel version; iteration 12 is the halfway point of its 24-iteration cycle.]

One might expect that the larger the image, the longer the iterative period. However, although the period increased from 60 for the \(100\space x\space 100\) image to 216 for the \(162\space x\space 162\) image, the period dropped dramatically to 24 for the \(220\space x\space 220\) image. In fact, although the topic has been explored, there is no known simple formula that gives the Arnold's cat map period of an image based upon its size or number of pixels.
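The period can, however, be found by direct computation. Below is a short sketch (again assuming NumPy; the helper name `cat_map_period` is mine) that finds the smallest \(k\) for which \(A_{cat}^{k}\) is the identity matrix modulo the side length \(N\). The number of iterations after which a given image reappears must divide this value, and it can be smaller if the image happens to be left unchanged by some earlier power of the map.

```python
import numpy as np

A_CAT = np.array([[1, 1], [1, 2]], dtype=np.int64)

def cat_map_period(n):
    """Smallest k >= 1 with A_CAT**k congruent to the identity matrix mod n."""
    identity = np.eye(2, dtype=np.int64)
    m = A_CAT % n
    k = 1
    while not np.array_equal(m, identity):
        m = (m @ A_CAT) % n
        k += 1
    return k

# The period jumps around irregularly with n instead of growing with image size.
for n in (100, 162, 220):
    print(n, cat_map_period(n))
```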
There are other ways in which we can explore Arnold's cat map. We can use the eigenvalues and the eigenvectors of the matrix \(A_{cat}\) to understand precisely how the mapping transforms the image during each iteration.
To find the eigenvalues of \(A_{cat}\):
$$ \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ \end{bmatrix} \rightarrow \begin{bmatrix} 1-\lambda & 1 \\ 1 & 2-\lambda \\ \end{bmatrix} $$
$$ (1 - \lambda )(2 - \lambda ) - 1 = 0 $$
$$ \lambda^{2} - 3\lambda + 1 = 0 $$
$$ \lambda = \frac{-(-3) \pm \sqrt{(-3)^{2} - 4(1)(1)}}{2(1)} = \frac{3 \pm \sqrt{5}}{2} $$
So, there are two eigenvalues for \(A_{cat}\):
$$ \lambda_1 = \frac{3 + \sqrt{5}}{2} \space \space \space \space \space \space \space \lambda_2 = \frac{3 - \sqrt{5}}{2} $$
To find the eigenvectors of \(A_{cat}\):
$$ (A_{cat}-\lambda_{1}I)\vec{x_{1}} = \vec{0} $$
$$ \begin{bmatrix} 1-\lambda_1 & 1 \\ 1 & 2-\lambda_1 \\ \end{bmatrix} \begin{bmatrix} x_{1} \\ y_{1} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \end{bmatrix} $$
$$ \begin{bmatrix} 1-\frac{3 + \sqrt{5}}{2} & 1 \\ 1 & 2-\frac{3 + \sqrt{5}}{2} \\ \end{bmatrix} \begin{bmatrix} x_{1} \\ y_{1} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \end{bmatrix} $$
$$ (1-\frac{3 + \sqrt{5}}{2})x_{1} + (1)y_{1} = 0 \rightarrow (\frac{-1 - \sqrt{5}}{2})x_{1} + y_{1} = 0 \rightarrow y_{1} = (\frac{1 + \sqrt{5}}{2})x_{1} $$
So, for \(\lambda_{1}\), the associated eigenvector is:

$$ \begin{bmatrix} 1 \\ \frac{1 + \sqrt{5}}{2} \\ \end{bmatrix} $$
$$ (A_{cat}-\lambda_{2}I)\vec{x_{2}} = \vec{0} $$
$$ \begin{bmatrix} 1-\lambda_2 & 1 \\ 1 & 2-\lambda_2 \\ \end{bmatrix} \begin{bmatrix} x_{2} \\ y_{2} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \end{bmatrix} $$
$$ \begin{bmatrix} 1-\frac{3 - \sqrt{5}}{2} & 1 \\ 1 & 2-\frac{3 - \sqrt{5}}{2} \\ \end{bmatrix} \begin{bmatrix} x_{2} \\ y_{2} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \end{bmatrix} $$
$$ (1-\frac{3 - \sqrt{5}}{2})x_{2} + (1)y_{2} = 0 \rightarrow (\frac{-1 + \sqrt{5}}{2})x_{2} + y_{2} = 0 \rightarrow y_{2} = -(\frac{1 + \sqrt{5}}{2})^{-1}x_{2} $$
So, for \(\lambda_{2}\), the associated eigenvector is:

$$ \begin{bmatrix} 1 \\ -(\frac{1 + \sqrt{5}}{2})^{-1} \\ \end{bmatrix} $$
The fraction \(\frac{1 + \sqrt{5}}{2}\) is the golden ratio, \(\varphi\).

And so the two eigenvalues are \(1 + \varphi\) and \(2 - \varphi\), with eigenvectors \(\begin{bmatrix} 1 \\ \varphi \\ \end{bmatrix}\) and \(\begin{bmatrix} 1 \\ -\varphi^{-1} \\ \end{bmatrix}\) respectively.
The eigenvalues represent the amount by which the image is stretched in the direction of each associated eigenvector. So the image is stretched in the direction of \(\begin{bmatrix} 1 \\ \varphi \\ \end{bmatrix}\) by a factor of \(1 + \varphi\), and it is compressed in the direction of \(\begin{bmatrix} 1 \\ -\varphi^{-1} \\ \end{bmatrix}\) by a factor of \(2 - \varphi\). The streaks that appear in the image run along the directions of these two eigenvectors.
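As a quick numerical check, the sketch below (a NumPy-based verification, not tied to any particular image) confirms that \(A_{cat}\) scales the two eigenvectors by exactly \(1 + \varphi\) and \(2 - \varphi\):

```python
import numpy as np

A_cat = np.array([[1.0, 1.0], [1.0, 2.0]])
phi = (1 + np.sqrt(5)) / 2  # the golden ratio

v1 = np.array([1.0, phi])         # eigenvector (1, phi)
v2 = np.array([1.0, -1.0 / phi])  # eigenvector (1, -1/phi)

# A_cat stretches v1 by a factor of 1 + phi ...
assert np.allclose(A_cat @ v1, (1 + phi) * v1)
# ... and compresses v2 by a factor of 2 - phi, which is less than 1.
assert np.allclose(A_cat @ v2, (2 - phi) * v2)

# Cross-check the eigenvalues against NumPy's solver.
assert np.allclose(np.sort(np.linalg.eigvals(A_cat)), np.sort([2 - phi, 1 + phi]))
```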
Finally, the modulo operation brings the entire area of the new, deformed image back within the square bounds of the original image. If the length of each side of the image represents \(k\) units, then after the modulo is applied, a pixel's location in the x direction becomes the remainder of its x-coordinate under the \(A_{cat}\) transformation divided by \(k\), and likewise for its location in the y direction.
For instance, take an image which is \(100\space x\space 100\) pixels. Under Arnold's cat map, the pixel at position \((30,80)\) will be transformed by \(A_{cat}\) to the position
$$ \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ \end{bmatrix} \begin{bmatrix} 30 \\ 80 \\ \end{bmatrix} = \begin{bmatrix} (1)(30) + (1)(80) \\ (1)(30) + (2)(80) \\ \end{bmatrix} = \begin{bmatrix} 110 \\ 190 \\ \end{bmatrix} \equiv_{100} \begin{bmatrix} 110 \bmod 100 \\ 190 \bmod 100 \\ \end{bmatrix} = \begin{bmatrix} 10 \\ 90 \\ \end{bmatrix} $$
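The same arithmetic can be verified in a couple of lines (plain Python integer arithmetic, using the pixel coordinates from the example above):

```python
# Transform the pixel at (x, y) = (30, 80) in a 100 x 100 image.
x, y, n = 30, 80, 100
new_x = (1 * x + 1 * y) % n   # (30 + 80)  mod 100 = 10
new_y = (1 * x + 2 * y) % n   # (30 + 160) mod 100 = 90
print((new_x, new_y))         # -> (10, 90)
```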