meditations in mathematics

Deep Structure

September 28, 2015 Lipa Long
What makes us who we are? Is there some set of interior qualities which defines us, or is meaning derived through our interconnectedness and the relationships we hold with our environment? Let's consider the role of connection, context, and structure by looking at isomorphisms, a special type of function found within category theory. This field is an abstraction of mathematics itself, formalizing category-wide generalizations and providing tools to universalize truths among a wide range of structures, revealing surprising connections and insights.

An isomorphism is a structure-preserving mapping between sets of elements from within a particular class of mathematical objects, such as groups, fields, or ordered sets. What makes an isomorphism isomorphic is that this type of function doesn’t change the relationships between the elements within the sets. In a sense, isomorphisms illuminate that many seemingly uniquely determined mathematical sets are simply different colorings of the same underlying structure. The sets effectively behave in the same way and look structurally similar given a correlated renaming of their elements. So while the elements themselves may be different, how those elements are interrelated and the ways in which they interact with one another are identical.

Let's see what this looks like in practice by exploring a few mathematical groups.

- - - - - - - - - -

Example \(1\):

Let group \(A\) be the set of all of the symmetries of an equilateral triangle\(^1\). These symmetries are the unique ways in which we can move a triangle such that the shape looks exactly the same after the motion is performed. An equilateral triangle has \(6\) distinct symmetries, as shown below, which reproduce the triangle exactly.

Group \(A\):

\(r_0\) - rotation of \(0^{\circ}\) (note that this is the same as a rotation of \(360^{\circ}\))
\(r_1\) - rotation of \(120^{\circ}\)
\(r_2\) - rotation of \(240^{\circ}\)
\(s\) - reflection about the bisector through the top apex
\(t\) - reflection about the bisector through the lower left apex
\(u\) - reflection about the bisector through the lower right apex

Let group \(B\) be the set of all permutations of \(3\) distinct elements\(^2\), denoted by \(a\), \(b\), and \(c\). There are \(3! = 6\) such permutations.

Group \(B\):

\(B_1 = \begin{bmatrix} a & b & c \\ a & b & c \\ \end{bmatrix}\) \(\qquad\) \(B_2 = \begin{bmatrix} a & b & c \\ b & c & a \\ \end{bmatrix}\) \(\qquad\) \(B_3 = \begin{bmatrix} a & b & c \\ c & a & b \\ \end{bmatrix}\)

\(B_4 = \begin{bmatrix} a & b & c \\ b & a & c \\ \end{bmatrix}\) \(\qquad\) \(B_5 = \begin{bmatrix} a & b & c \\ c & b & a \\ \end{bmatrix}\) \(\qquad\) \(B_6 = \begin{bmatrix} a & b & c \\ a & c & b \\ \end{bmatrix}\)

Both of these sets are defined as groups under the operation of composition: \(\circ\). For instance, in group \(A\), \(t \circ r_1 = s\) because, remembering that when evaluating a composition we go from right to left, we first rotate the triangle \(120^{\circ}\) and then we reflect the triangle about the bisector through the lower left apex. Check for yourself that you will end up with the same orientation of vertices as if you simply performed a single \(s\) transformation. In group \(B\), \(B_4 \circ B_2 = B_6\) because: \(B_2\) takes \(a\) to \(b\) and then \(B_4\) takes \(b\) to \(a\); \(B_2\) takes \(b\) to \(c\) and then \(B_4\) takes \(c\) to \(c\); and \(B_2\) takes \(c\) to \(a\) and then \(B_4\) takes \(a\) to \(b\). So, \(B_4 \circ B_2\) takes \(a\) to \(a\), \(b\) to \(c\), and \(c\) to \(b\), which is represented by the single permutation \(B_6\).
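If you would like to check compositions like these mechanically, here is a minimal Python sketch that verifies \(B_4 \circ B_2 = B_6\). The dictionary encoding of the permutations is mine, for illustration, not part of the article:

```python
# Each permutation of group B is encoded as a dictionary sending an
# element of {a, b, c} to its image (top row -> bottom row above).
B2 = {'a': 'b', 'b': 'c', 'c': 'a'}
B4 = {'a': 'b', 'b': 'a', 'c': 'c'}
B6 = {'a': 'a', 'b': 'c', 'c': 'b'}

def compose(g, f):
    """Return g o f: apply f first, then g (right to left)."""
    return {x: g[f[x]] for x in f}

# B4 o B2 should equal the single permutation B6.
print(compose(B4, B2) == B6)  # True
```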

Now we will make a table for each group to see how all of the elements of that group interact with one another under composition.\(^3\)
[Cayley table for group \(A\)]

[Cayley table for group \(B\)]

Upon examination, one finds that the structure of these two tables is exactly the same. This is seen by a simple relabeling. In the table for group \(B\), relabel as follows:

\(B_1 - r_0 \qquad B_2 - r_1 \qquad B_3 - r_2 \qquad B_4 - s \qquad B_5 - t \qquad B_6 - u\)

This will produce exactly the table for group \(A\). Thus, the elements of group \(A\) and the elements of group \(B\) stand in identical relational correspondence: they interact with one another in precisely the same way.
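The relabeling claim can also be checked by brute force. The sketch below uses one possible encoding, not the article's: each symmetry in group \(A\) is represented by the permutation it induces on the three vertex positions (top, lower left, lower right), with rotations taken clockwise so that \(t \circ r_1 = s\) as in the example above. It then confirms that, under the correspondence just listed, every product in group \(B\) matches the corresponding product in group \(A\):

```python
def compose(g, f):
    """g o f: apply f first, then g."""
    return {x: g[f[x]] for x in f}

# Group A: each symmetry as a permutation of the vertex positions
# T (top), L (lower left), R (lower right).
# Convention assumed in this sketch: rotations act clockwise.
A = {
    'r0': {'T': 'T', 'L': 'L', 'R': 'R'},
    'r1': {'T': 'R', 'R': 'L', 'L': 'T'},
    'r2': {'T': 'L', 'L': 'R', 'R': 'T'},
    's':  {'T': 'T', 'L': 'R', 'R': 'L'},
    't':  {'L': 'L', 'T': 'R', 'R': 'T'},
    'u':  {'R': 'R', 'T': 'L', 'L': 'T'},
}

# Group B: the six permutations of {a, b, c}.
B = {
    'B1': {'a': 'a', 'b': 'b', 'c': 'c'},
    'B2': {'a': 'b', 'b': 'c', 'c': 'a'},
    'B3': {'a': 'c', 'b': 'a', 'c': 'b'},
    'B4': {'a': 'b', 'b': 'a', 'c': 'c'},
    'B5': {'a': 'c', 'b': 'b', 'c': 'a'},
    'B6': {'a': 'a', 'b': 'c', 'c': 'b'},
}

# The relabeling from the text: B1-r0, B2-r1, B3-r2, B4-s, B5-t, B6-u.
relabel = {'B1': 'r0', 'B2': 'r1', 'B3': 'r2', 'B4': 's', 'B5': 't', 'B6': 'u'}

def name_of(group, perm):
    """Find the label of a given permutation within a group."""
    return next(k for k, v in group.items() if v == perm)

# Check that the relabeled Cayley tables agree entry by entry.
ok = all(
    relabel[name_of(B, compose(B[i], B[j]))]
    == name_of(A, compose(A[relabel[i]], A[relabel[j]]))
    for i in B for j in B
)
print(ok)  # True
```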

We therefore say that group \(A\) is isomorphic to group \(B\).

- - - - - - - - - -

Example \(2\):

We will again use group \(B\) from example \(1\): the set of all permutations of \(3\) elements, denoted by \(a\), \(b\), and \(c\), under the operation of composition. Group \(C\) will be the set of all \(2\times 2\) matrices whose entries are elements of the group of integers modulo \(2\) (i.e., the entries can only be either \(0\) or \(1\), and \(1 + 1 = 0\)), and whose determinant is not \(0\).\(^4\) The operation for this group will be matrix multiplication.

Group \(B\):

\(B_1 = \begin{bmatrix} a & b & c \\ a & b & c \\ \end{bmatrix}\) \(\qquad\) \(B_2 = \begin{bmatrix} a & b & c \\ b & c & a \\ \end{bmatrix}\) \(\qquad\) \(B_3 = \begin{bmatrix} a & b & c \\ c & a & b \\ \end{bmatrix}\)

\(B_4 = \begin{bmatrix} a & b & c \\ b & a & c \\ \end{bmatrix}\) \(\qquad\) \(B_5 = \begin{bmatrix} a & b & c \\ c & b & a \\ \end{bmatrix}\) \(\qquad\) \(B_6 = \begin{bmatrix} a & b & c \\ a & c & b \\ \end{bmatrix}\)


Group \(C\):

\(C_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}\) \(\qquad\) \(C_2 = \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ \end{bmatrix}\) \(\qquad\) \(C_3 = \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ \end{bmatrix}\)

\(C_4 = \begin{bmatrix} 1 & 1 \\ 0 & 1 \\ \end{bmatrix}\) \(\qquad\) \(C_5 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\) \(\qquad\) \(C_6 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ \end{bmatrix}\)
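
To see why group \(C\) contains exactly these six matrices, one can enumerate all \(16\) matrices with entries in \(\{0, 1\}\) and keep those whose determinant is nonzero modulo \(2\). Here is a quick Python sketch of that enumeration (the encoding is mine, for illustration):

```python
from itertools import product

# All 2x2 matrices with entries in {0, 1}; keep those whose
# determinant ad - bc is nonzero modulo 2, i.e. the invertible ones.
invertible = [
    ((a, b), (c, d))
    for a, b, c, d in product((0, 1), repeat=4)
    if (a * d - b * c) % 2 != 0
]

print(len(invertible))  # 6
for m in invertible:
    print(m)
```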


Let's restate the table for group \(B\), which was given in example \(1\), and then make a table for the elements in group \(C\) under the operation of matrix multiplication. Keep in mind when performing the matrix multiplications that because the entries of each matrix are elements of \(\mathbb{Z}_2\), \(1 + 1 = 0\).
[Cayley table for group \(B\)]

[Cayley table for group \(C\)]

Note that, again, the structure of the tables is exactly the same. In group \(C\), we need only replace each instance of the letter \(C\) by the letter \(B\) and we will have exactly reproduced the table for group \(B\).

Thus, group \(B\) is isomorphic to group \(C\). And because isomorphism is a transitive relation and group \(A\) is isomorphic to group \(B\), group \(A\) is therefore also isomorphic to group \(C\).
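Here too the claim can be verified by brute force. The following Python sketch (again, the encoding is mine) multiplies the matrices of group \(C\) modulo \(2\), composes the permutations of group \(B\), and confirms that replacing each \(B_i\) by \(C_i\) carries every entry of one Cayley table onto the corresponding entry of the other:

```python
# Group B: the permutations of {a, b, c} from example 1, keyed by label.
B = {
    'B1': {'a': 'a', 'b': 'b', 'c': 'c'},
    'B2': {'a': 'b', 'b': 'c', 'c': 'a'},
    'B3': {'a': 'c', 'b': 'a', 'c': 'b'},
    'B4': {'a': 'b', 'b': 'a', 'c': 'c'},
    'B5': {'a': 'c', 'b': 'b', 'c': 'a'},
    'B6': {'a': 'a', 'b': 'c', 'c': 'b'},
}

# Group C: the invertible 2x2 matrices over Z_2 listed above, keyed by label.
C = {
    'C1': ((1, 0), (0, 1)),
    'C2': ((0, 1), (1, 1)),
    'C3': ((1, 1), (1, 0)),
    'C4': ((1, 1), (0, 1)),
    'C5': ((0, 1), (1, 0)),
    'C6': ((1, 0), (1, 1)),
}

def compose(g, f):
    """g o f on permutations: apply f first, then g."""
    return {x: g[f[x]] for x in f}

def matmul2(m, n):
    """Multiply two 2x2 matrices, reducing each entry modulo 2."""
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(2)) % 2 for j in range(2))
        for i in range(2)
    )

def label(group, value):
    """Find the label of a given element within a group."""
    return next(k for k, v in group.items() if v == value)

# B_i o B_j = B_k exactly when C_i * C_j = C_k, for every pair (i, j).
ok = all(
    label(B, compose(B[f'B{i}'], B[f'B{j}']))[1]
    == label(C, matmul2(C[f'C{i}'], C[f'C{j}']))[1]
    for i in range(1, 7) for j in range(1, 7)
)
print(ok)  # True
```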

- - - - - - - - - -

Finding an isomorphism between two groups is an extremely powerful discovery because a vast number of group properties are preserved under isomorphism; knowing something about one of the groups immediately tells us something about the other. Some of the preserved properties include: the order of the group, whether or not the group is abelian\(^5\), isomorphic relationships to other groups, the number of elements of a given order within the group, and much more.
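For instance, here is a small Python sketch (an illustration of mine, using the permutation encoding of group \(B\)) that computes two of these invariants directly: whether the group is abelian, and how many elements it has of each order. By the isomorphisms above, groups \(A\) and \(C\) necessarily give the same answers:

```python
from collections import Counter
from itertools import permutations

# All six permutations of {a, b, c}, i.e. the elements of group B.
B = [dict(zip('abc', image)) for image in permutations('abc')]
identity = dict(zip('abc', 'abc'))

def compose(g, f):
    """g o f: apply f first, then g."""
    return {x: g[f[x]] for x in f}

def element_order(p):
    """Smallest n >= 1 such that applying p n times gives the identity."""
    q, n = p, 1
    while q != identity:
        q, n = compose(p, q), n + 1
    return n

# Two isomorphism invariants: commutativity and the count of elements of each order.
print(all(compose(f, g) == compose(g, f) for f in B for g in B))  # False: not abelian
print(Counter(element_order(p) for p in B))  # Counter({2: 3, 3: 2, 1: 1})
```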

In fact, we could, in a sense, "do the math" of one group using the correlated elements of the other group under their respective operation, if it proved easier or more convenient to do so. This allows us great flexibility and creates a kind of meta-math in which we can perform mathematics in a highly abstracted way.

Under this notion, the isomorphism between groups \(A\) and \(C\) means that we can treat the matrices of group \(C\) as the symmetries of an equilateral triangle! And both the matrices of \(C\) and the symmetries of \(A\) can be treated as the permutations of group \(B\). This profound insight reveals the depth of the notion of isomorphisms. Triangle symmetries originate within a geometric context; permutations arise within the realm of combinatorics; matrices are abstract concepts from linear algebra. These three sets come from very different areas within mathematics, but their isomorphic relationship reveals that, structurally, each of these groups features the same underlying organization and form.

- - - - - - - - - -

Is everything reducible to structure?

If two sets are structurally the same, are they the same set? From where does a set derive its meaning? Is it through its elements or through the relationships between them? Every group is isomorphic to a group of permutations (Cayley's theorem). Should we continue talking about the symmetries of triangles or about the matrices of group \(C\), or should we simply speak of permutations? Each of these groups has its own context and origin. Does that context give the variety among isomorphic groups its significance?

What does it mean for two sets to not be isomorphic? Do they have fundamentally different and detached natures or do they also somehow link together within a larger, underlying structure?

We hold relationships with people, with non-human life, with ourselves, with the planet and environment, with feelings, with ideas, and with the universe as a whole. Can we find parallels between the ways in which these connections operate? How are they the same? Are there differences? Can we learn healthier, more growing ways of interacting within each of these relational fields by exploring our connections within other areas?

Emergent properties are those which are present within a system or an aggregate of objects, but which are not present in the individual particles or constituents of that system. For instance, the physical arrangement of H\(_2\)O molecules into liquid water produces a substance which has the property of being "wet", but the molecules themselves (and other arrangements of them, such as ice or gas) are not wet. Rearranging the same components into different relational patterns can therefore cause different properties to arise. Would groups with identical underlying structure produce similar emergent properties?

Does anything exist which can be separated from its interrelational context? Is there something which can be isolated and interpreted outside of the interwoven landscape of reality? Is it meaningful to talk about objects without considering their connection to the deeper structure within which they're embedded?

- - - - - - - - - -



(1) In group theory, this is the dihedral group \(D_3\).
(2) In group theory, this is the symmetric group \(S_3\).
(3) This kind of table is called a Cayley table.
(4) In group theory, this is the general linear group of degree \(2\) over \(\mathbb{Z}_2\) and is denoted \(GL_2(\mathbb{Z}_2)\).
(5) A group is abelian if for any two elements, \(a\) and \(b\), of the group, \(a \circ b = b \circ a\), where \(\circ\) is the operation of that group.
Tags: abstract algebra, category theory, group theory

Ghosts in Chaos

July 16, 2015 Lipa Long
What does chaos do to information? When organization dissolves into static, is the information lost? Can it be recovered? We'll explore these questions through Arnold's cat map, a chaotic mapping rooted within the study of dynamical systems.

Arnold's cat map is the transformation defined by the matrix \(A_{cat} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ \end{bmatrix}\), with the result then taken modulo \(1\). The mapping was first explored in the 1960s by the Russian mathematician Vladimir Arnold, who studied the transformation's effects on a picture of a cat, hence the mapping's whimsical name. Arnold's cat map is a mapping from the two-dimensional torus to itself, but Arnold demonstrated that this toroidal automorphism can be applied to a two-dimensional picture by considering the image as being wrapped over a two-dimensional torus.

Given a square image, each pixel can be treated as a position vector, and the modulo is taken to be the length of any side of the image. This is equivalent to considering the image as a square of area \(1\) whose pixels are located at positions \((0,0) \leq (x,y) \leq (1,1)\). Arnold's cat map will take each pixel and move it to a different location, or different position vector, under the transformation:

\(\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ \end{bmatrix} \begin{bmatrix} x_{k} \\ y_{k} \\ \end{bmatrix} \pmod{n}\), where \((x_k, y_k)\) is a pixel's position after \(k\) iterations and \(n\) is the length of any side of the image


The determinant of \(A_{cat}\) is \(\begin{vmatrix} 1 & 1 \\ 1 & 2 \\ \end{vmatrix} = (1)(2) - (1)(1) = 2 - 1 = 1\). A determinant of \(1\) means that the transformation induced by \(A_{cat}\) is area-preserving. So the transformation produces another image of the same area, which is a rearrangement of the individual pixels of the original image.

Arnold's cat map has the interesting effect of displaying simultaneous order and chaos. Under iterated transformations, an image is distorted into apparent static, but through successive applications of the mapping the image is eventually reproduced. Throughout the iterations, "ghost" images of the original image sometimes appear, often inverted or as multiple tiled copies of the original picture. In the example below, a square \(100\space x\space 100\) pixel image is processed through Arnold's cat map until the original image is eventually recovered. The number below each image signifies the number of iterations of the mapping which were needed to generate that image.
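For readers who want to experiment, here is a minimal Python sketch of one way to apply the mapping to a square image array and count how many iterations it takes for the image to first return to itself. The zero-based pixel indexing, the helper names, and the choice of a random test array are conventions of this sketch rather than anything specified in the article, and different conventions can shift the exact iteration counts you observe:

```python
import numpy as np

def cat_map(image):
    """Apply one iteration of Arnold's cat map to a square n x n array."""
    n = image.shape[0]
    result = np.empty_like(image)
    for x in range(n):
        for y in range(n):
            # New position of the pixel at (x, y) under [[1, 1], [1, 2]], taken mod n.
            new_x = (x + y) % n
            new_y = (x + 2 * y) % n
            result[new_x, new_y] = image[x, y]
    return result

def period(image):
    """Number of iterations needed before the original image reappears."""
    current, count = cat_map(image), 1
    while not np.array_equal(current, image):
        current, count = cat_map(current), count + 1
    return count

# Example: a random 100 x 100 "image" (any square array of pixel values works).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(100, 100))
print(period(img))
```

Running the same function on differently sized copies of an image is one way to explore how the period changes with size, which is taken up below.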
 
 
[Images for iterations \(01\) through \(60\) of the \(100\space x\space 100\) pixel cat picture; iteration \(30\) is marked as the halfway point.]

 
The following are a few select iterations of a larger version (\(162\space x\space 162\) pixels) of the same cat image in order to show the detail of the potential patterns and ghost images. While the same image had an iterative period of \(60\) when sized at \(100\space x\space 100\) pixels, the \(162\space x\space 162\) pixel version has an iterative period of \(216\).
 
 
[Images for iterations \(01\), \(72\), and \(108\) (halfway) of the \(162\space x\space 162\) pixel version.]

 
The following are a few select iterations of a still larger version (\(220\space x\space 220\) pixels) of the cat image in order to show a few patterns and ghost images. This \(220\space x\space 220\) pixel version has an iterative period of \(24\).
 
 
[Images for iterations \(01\), \(08\), and \(12\) (halfway) of the \(220\space x\space 220\) pixel version.]

 
One might expect that the larger the image size, the longer the iterative period. However, although the period increased from \(60\) for the \(100\space x\space 100\) image to \(216\) for the \(162\space x\space 162\) image, the period length dropped dramatically down to \(24\) for the \(220\space x\space 220\) image. In fact, although the topic has been explored, there is no known formula to determine the period of Arnold's cat map for an image based upon its size or number of pixels.

There are other ways in which we can explore Arnold's cat map. We can use the eigenvalues and the eigenvectors of the matrix \(A_{cat}\) to understand precisely how the mapping transforms the image during each iteration.

To find the eigenvalues of \(A_{cat}\):

$$ \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ \end{bmatrix} \rightarrow \begin{bmatrix} 1-\lambda & 1 \\ 1 & 2-\lambda \\ \end{bmatrix} $$
$$ (1 - \lambda )(2 - \lambda ) - 1 = 0 $$
$$ \lambda^{2} - 3\lambda + 1 = 0 $$
$$ \lambda = \frac{-(-3) \pm \sqrt{(-3)^{2} - 4(1)(1)}}{2(1)} = \frac{3 \pm \sqrt{5}}{2} $$

So, there are two eigenvalues for \(A_{cat}\):

$$ \lambda_1 = \frac{3 + \sqrt{5}}{2} \space \space \space \space \space \space \space \lambda_2 = \frac{3 - \sqrt{5}}{2} $$

To find the eigenvectors of \(A_{cat}\):
$$ (A_{cat}-\lambda_{1}I)\vec{x_{1}} = \vec{0} $$
$$ \begin{bmatrix} 1-\lambda_1 & 1 \\ 1 & 2-\lambda_1 \\ \end{bmatrix} \begin{bmatrix} x_{1} \\ y_{1} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \end{bmatrix} $$
$$ \begin{bmatrix} 1-\frac{3 + \sqrt{5}}{2} & 1 \\ 1 & 2-\frac{3 + \sqrt{5}}{2} \\ \end{bmatrix} \begin{bmatrix} x_{1} \\ y_{1} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \end{bmatrix} $$
$$ (1-\frac{3 + \sqrt{5}}{2})x_{1} + (1)y_{1} = 0 \rightarrow (\frac{-1 - \sqrt{5}}{2})x_{1} + y_{1} = 0 $$
$$ y_{1} = (\frac{1 + \sqrt{5}}{2})x_{1} $$
So, for \(\lambda_{1}\), the associated eigenvector is:

$$ \begin{bmatrix} 1 \\ \frac{1 + \sqrt{5}}{2} \\ \end{bmatrix} $$
$$ (A_{cat}-\lambda_{2}I)\vec{x_{2}} = \vec{0} $$
$$ \begin{bmatrix} 1-\lambda_2 & 1 \\ 1 & 2-\lambda_2 \\ \end{bmatrix} \begin{bmatrix} x_{2} \\ y_{2} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \end{bmatrix} $$
$$ \begin{bmatrix} 1-\frac{3 - \sqrt{5}}{2} & 1 \\ 1 & 2-\frac{3 - \sqrt{5}}{2} \\ \end{bmatrix} \begin{bmatrix} x_{2} \\ y_{2} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \end{bmatrix} $$
$$ (1-\frac{3 - \sqrt{5}}{2})x_{2} + (1)y_{2} = 0 \rightarrow (\frac{-1 + \sqrt{5}}{2})x_{2} + y_{2} = 0 $$
$$ y_{2} = -(\frac{1 + \sqrt{5}}{2})^{-1}x_{2} $$
So, for \(\lambda_{2}\), the associated eigenvector is:

$$ \begin{bmatrix} 1 \\ -(\frac{1 + \sqrt{5}}{2})^{-1} \\ \end{bmatrix} $$
The fraction \(\frac{1 + \sqrt{5}}{2}\) is the golden ratio, \(\varphi\).

So, the two eigenvalues are \(1 + \varphi\) and \(2 - \varphi\), with eigenvectors \(\begin{bmatrix} 1 \\ \varphi \\ \end{bmatrix}\) and \(\begin{bmatrix} 1 \\ -\varphi^{-1} \\ \end{bmatrix}\) respectively.

The eigenvalues represent the amount by which the image is stretched in the direction of each associated eigenvector. So the image is stretched in the direction of \(\begin{bmatrix} 1 \\ \varphi \\ \end{bmatrix}\) by a factor of \(1 + \varphi\), and it is compressed in the direction of \(\begin{bmatrix} 1 \\ -\varphi^{-1} \\ \end{bmatrix}\) by a factor of \(2 - \varphi\). The streaks that appear in the images run along the directions of the two eigenvectors.
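Since \(\varphi^2 = \varphi + 1\), these eigenvalues can also be written as reciprocal powers of the golden ratio, which ties back to the determinant computed above:

$$ \lambda_1 = 1 + \varphi = \varphi^{2}, \qquad \lambda_2 = 2 - \varphi = \varphi^{-2}, \qquad \lambda_1 \lambda_2 = 1 = \det(A_{cat}) $$

The stretching by \(\varphi^{2}\) along one eigenvector exactly balances the compression by \(\varphi^{-2}\) along the other, which is another way of seeing that the mapping preserves area.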

Finally, the modulo \(n\) aspect of the transformation brings the entire area of the new, deformed image back into the square bounds of the original image. Let the length of each square below be \(n\) units. Under the \(A_{cat}\) transformation, the image is stretched such that the yellow, blue, and green sections fall outside of the original square. But once the modulo is applied, each pixel's \(x\)- and \(y\)-coordinates become the remainders of its transformed \(x\)- and \(y\)-coordinates upon division by \(n\). This pulls the yellow, blue, and green pieces back into the original square.
credit: Pokipsy76~commonswiki

- - - - - - - - - -

What is information, and what does it look like? Does disorder always contain deep-rooted information, and is it just a matter of looking in a different way to find it?

Some of the images produced by Arnold's cat map feature multiple "ghost images" of the original cat. During the transformations, no pixels are added or removed from the image; they are simply rearranged. The two sets of pink ears in image \(30\) of the \(100\space x\space 100\) pixel set are made entirely from the pink pixels in the single set of ears in the original image. The same is true for the \(9\) sets of ears in image \(72\) of the \(162\space x\space 162\) pixel set and the \(441\) sets of ears in image \(12\) of the \(220\space x\space 220\) pixel set. In this way the transformation has, at times, a replicating effect, although the replications necessarily feature reduced resolution because of the fixed number of available pixels of each color.
The perfect restoration of the original image at the end of the iterative period speaks to a preservation of information throughout the series of transformations. These ghostly multiplied reconstructions are glimpses of that preservation amid the chaotic static of the mapping's period.

Notice that for each of the different image sizes, the first application of the transformation (in each case: image \(01\)) is the same. Arnold's cat map begins by transforming the images in an identical way, but the sets of images produced throughout an entire period vary greatly between the three sizes. Patterns appear in each image set which do not appear in the others. Consider the algorithm used to generate the images, and try to determine why the image sets look identical toward the beginning and end of their periods but deviate into unique patterns throughout the middle.
During periods of chaos in any given situation, how can we look for preserved information? When chaos takes over and you feel lost within the static of a situation, when your vision of the present or future becomes hazy, when you lose the context of the moment and your trajectory blurs, look for ghost images of the big picture. Perhaps they are fuzzy, distorted, or simply quite small, but they hint at a larger long-term vision which will eventually reemerge, sharp and clear.
Tags: chaos, linear algebra, information, golden ratio, dynamical systems

Zero Space

March 12, 2015 Lipa Long
The zero space, \(\lbrace0\rbrace\), is a \(0\)-dimensional vector space over every field.

The vector space axioms are satisfied, as vector addition and scalar multiplication become trivial. The basis of the zero space over any field is the empty set \(\lbrace\space\rbrace\).

- - - - - - - - - -

A point is infinitely small. It has no width, no depth, no height -- only location. It is a placeholder, a mark of presence. It is a way of saying: there is something here which exists. This is the fundamental barrier between something and nothing -- a jumping off point for universes and infinities, for you and me, for everything that is.

While zero-dimensionality implies that a space is totally disconnected, it is possible for a space of dimension \(0\) to consist of more than one point. In other words, some spaces can contain multiple points and still be \(0\)-dimensional. Where is the crossing between the disconnection of the zeroth dimension and the continuum of the first dimension? Are your experiences throughout life discrete or do they exhibit an endless continuity? Do you see your place within reality as being along a continuum with your surroundings or as being in isolation from everything around you?

The zero space is often called the trivial space. The notion that something is trivial in mathematics implies that it is obvious, simple, or uninteresting. We say that a solution or a proof is trivial in order to gloss over it and endeavor upon more complex examples and ideas. But the simple can be profound, and it has its own flavor of insight to offer.
Tags: zero, dimensions, fields, continuum, vector spaces
