Fractals Part I

Uploaded by himaanshu09 on 10-Oct-2015

Description: Fractals-Basics

  • 5/19/2018 Fractals Part I

    IES-01: Fractals and Application

    Attendance: 10 marks

    Assignments: 10 marks

    Class Performance: 5 marks

    (Weightage: Absent: 0, Late: , Present: 1)

    Geometry developed as a collection of tools for understanding the shapes of nature.

    For millennia, symmetry has been recognized as a powerful principle in geometry and in art.

    We begin by reviewing the familiar forms of symmetry, then show that fractals reveal a new kind of symmetry: symmetry under magnification.

    Many shapes that at first appear complicated reveal an underlying simplicity when viewed with an awareness of symmetry under magnification.

    We begin by reviewing the familiar symmetries of nature: symmetry under translation, reflection, and rotation.

    We are familiar with three forms of symmetry, exhibited approximately in many natural and manufactured situations: translational, reflectional, and rotational.

    Less familiar is symmetry under magnification: zooming in on an object leaves the shape approximately unaltered.

    Here we introduce some basic geometry of fractals, with emphasis on the Iterated Function System (IFS) formalism for generating fractals.

    In addition, we explore the application of IFS to detect patterns, and also several examples of architectural fractals.

    First, though, we review familiar symmetries of nature, preparing us for the new kind of symmetry that fractals exhibit.

    The geometric characterization of the simplest fractals is self-similarity: the shape is made of smaller copies of itself. The copies are similar to the whole: same shape but different size.

    The simplest fractals are constructed by iteration. For example, start with a filled-in triangle and iterate this process:

    For every filled-in triangle, connect the midpoints of the sides and remove the middle triangle. Iterating this process produces, in the limit, the Sierpinski Gasket.

    The gasket is self-similar. That is, it is made up of smaller copies of itself.

    We can describe the gasket as made of three copies, each 1/2 as tall and 1/2 as wide as the original. But note a consequence of self-similarity:

    each of these copies is made of three still smaller copies, so we can say the gasket is made of nine copies each 1/4 by 1/4 of the original, or 27 copies each 1/8 by 1/8, or ... . Usually, we prefer the simplest description.

    This implies fractals possess scale invariance.


    More Examples of Self-Similarity

    The gasket is made of three copies of itself, each scaled by 1/2, with two of the copies translated. With slightly more complicated rules, we can build fractals that are reasonable, if crude, approximations of natural objects.

    Later we will find the rules to make these fractals.

    For now, to help train your eye to find fractal decompositions of objects, try to find smaller copies of each shape within the shape.

    The tree is not so hard, except for the trunk.

    The Mandelbrot set: a different nonlinear transformation gives the most famous of all fractals.

    Fractal landscapes: with more sophistication (and computing power), fractals can produce convincing forgeries of realistic scenes.

    Making realistic-looking landscapes is difficult enough, but doing this so they can be stored in small files is remarkable.

    Fractals in nature: after looking at so many geometrical and computer-generated examples, here is a short gallery of examples from Nature.

    Fractals found in nature differ from our first mathematical examples in two important ways:

    the self-similarity of natural fractals is approximate or statistical, and

    this self-similarity extends over only a limited range of scales.

    To understand the first point, note that many forces sculpt and grow natural fractals, while mathematical fractals are built by a single process.

    For the second point, the forces responsible for a natural fractal structure are effective over only a limited range of distances.

    The waves carving a fractal coastline are altogether different from the forces holding together the atoms of the coastline.

    One way to guarantee self-similarity is to build a shape by applying the same process over smaller and smaller scales. This idea can be realized with a process called initiators and generators.

    The initiator is the starting shape.

    The generator is a collection of scaled copies of the initiator.

    The rule is this: in the generator, replace each copy of the initiator with a scaled copy of the generator (specifying orientations where necessary).

    The initiator is a filled-in triangle, the generator the shape on the right.

    Sierpinski Gasket: How can we turn "connect the midpoints and remove the middle triangle" into initiators and generators?


    Koch curve

    Tents upon tents upon tents ... makes a shape we shall see is very strange: a curve enclosed in a small box and yet infinitely long.

    Take as initiator the line segment of length 1, and as generator the shape on the right.

    Though its construction is so simple, the Koch curve has some properties that appear counterintuitive.

    For example, we shall see that it is infinitely long, and that every piece of it, no matter how small it appears, also is infinitely long.
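    The infinite length is easy to check numerically. The sketch below (not from the original slides) uses only the replacement rule: each step replaces every segment with 4 segments, each 1/3 as long, so each step multiplies the total length by 4/3.

```python
# Koch curve length after n replacement steps, starting from a
# segment of length 1. Each step multiplies the length by 4/3.
def koch_length(n):
    return (4 / 3) ** n

# The length grows without bound, so the limiting curve is infinitely long.
lengths = [koch_length(n) for n in range(5)]
```

    Since every sub-piece of the curve is itself a scaled Koch curve, the same divergence applies to every piece.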

    Cantor set

    Cut all the tents out of the Koch curve and we are left with something that appears to be little more than holes. But we can be fooled by appearances.

    Again, take as initiator the line segment of length 1, but now the generator is the shape shown below.

    Here is a picture of the Cantor set resolved to the level of single pixels.

    Although so much has been removed that the Cantor set is hardly present at all, we shall find this fractal in many mathematical, and some physical and even literary, applications.
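    The construction can be sketched in a few lines. This is an assumed implementation of the standard middle-thirds construction, tracking intervals explicitly:

```python
# Build the Cantor middle-thirds set by repeatedly keeping the
# outer thirds of each interval.
def cantor_step(intervals):
    """Replace each interval (a, b) with its first and last thirds."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))
        out.append((b - third, b))
    return out

level = [(0.0, 1.0)]
for _ in range(3):
    level = cantor_step(level)
# After 3 steps: 2**3 = 8 intervals, each of length 1/27,
# so the total remaining length is (2/3)**3 of the original.
```

    The total length (2/3)^n goes to 0, which is one way to make precise the sense in which "so much has been removed."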

    Fractals in the Kitchen

    Cauliflower is a wonderful example of a natural fractal. A small piece of a cauliflower looks like a whole cauliflower.

    Pieces of the pieces look like the whole cauliflower, and so on for several more subdivisions.

    Here is a picture of a cauliflower and a piece broken from it.

    As -------- cook, the boiling batter forms bubbles of many different sizes, giving rise to a fractal distribution of rings.

    Some big rings, more middle-size rings, still more smaller rings, and so on.


    Some breads are natural fractals. Bread dough rises because yeast produces bubbles of carbon dioxide.

    Many bubbles are small, some are middle-sized, a few are large, typical of the distribution of gaps in a fractal.

    So bread dough is a foam; bread is that foam baked solid.

    Kneading the dough too much breaks up the larger bubbles and gives bread of much more uniform (non-fractal) texture.

    Do fractals have practical applications? How about an invisibility cloak?

    On Tuesday, August 28, 2012, U.S. patent number 8,253,639 was issued to Nathan Cohen and his group at FracTenna, for a wide-band microwave invisibility cloak based on fractal antenna geometry.

    The antenna consists of an inner ring, the boundary layer, that prevents microwaves from being transmitted across the inside of this ring. This is the region that will be invisible to outside observers. Surrounding the boundary layer are six concentric rings that guide microwaves around the boundary layer, to reconverge at the point antipodal to where they entered the cloak.

    On the left is a magnification of one of the outer rings of the cloak. On the right is the boundary layer fractal.

    If fabricated at the sub-micron scale, instead of the current mm scale, this technology should act as an optical invisibility cloak.

    In late August, 2012, Cohen's group cloaked a person. Interesting times ahead.

    www.fractenna.com

    Now down to work. We learn to grow fractal images, but first must build up the mechanics of plane transformations.

    The geometry of plane transformations is the mechanics of the transformations that produce more general fractals by Iterated Function Systems.

    To generate all but the simplest fractals, we need to understand the geometry of plane transformations. Here we describe and illustrate the four features of plane transformations.

    Affine transformations of the plane are composed of scalings, reflections, rotations, and translations.

    Scalings

    The scaling factor in the x-direction is denoted r.

    The scaling factor in the y-direction is denoted s.

    Assume there are no rotations. Then if r = s, the transformation is a similarity; otherwise it is an affinity.

    Note the scalings are always toward the origin. That is, the origin is the fixed point of all scalings.

    Reflections

    Negative r reflects across the y-axis.

    Negative s reflects across the x-axis.

    Reflection across both the x- and y-axes is equivalent to rotation by 180° about the origin.

    Rotations

    The angle θ measures rotations of horizontal lines.

    The angle φ measures rotations of vertical lines.

    The condition θ = φ gives a rigid rotation about the origin. Positive angles are counterclockwise.


    Translations

    Horizontal translation is measured by e.

    Vertical translation is measured by f.

    The matrix formulation of an affine transformation involves scaling by r in the x-direction, by s in the y-direction, rotations by θ and φ, and translations by e and f.

    We adopt this convention:

    scalings first, reflections second, rotations third, and translations last.

    This order is imposed by the matrix formulation.

    Emphasizing this order, the components of a transformation are encoded in tables of this form.
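    Written out, the matrix form with table entries (r, s, θ, φ, e, f) looks like this; the sign placement follows one standard convention for this formalism (with θ the rotation of horizontal lines and φ of vertical lines), so treat it as a sketch rather than the only possible encoding:

```latex
T\begin{pmatrix} x \\ y \end{pmatrix}
  = \begin{pmatrix} r\cos\theta & -s\sin\varphi \\
                    r\sin\theta & \phantom{-}s\cos\varphi \end{pmatrix}
    \begin{pmatrix} x \\ y \end{pmatrix}
  + \begin{pmatrix} e \\ f \end{pmatrix}
```

    With θ = φ this is a rigid rotation combined with scaling; negative r or s produces the reflections described earlier.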

    With this encoding of transformations of the plane, we can make fractals using the method called Iterated Function Systems (IFS).

    Generating fractals by iterating a collection of transformations is the Iterated Function System (IFS) method, popularized by Barnsley, based on theoretical work by Hutchinson and Dekking. We use a simple example to see how it works.

    Iterated Function Systems

    To illustrate the IFS method, we show how a specific set of IFS rules generates a Sierpinski gasket.

    We begin with a right isosceles Sierpinski gasket. Certainly, the gasket can be viewed as made up of three copies of itself, each scaled by a factor of 1/2 in both the x- and y-directions.

    To determine the translation amount of each piece, take some point of the whole fractal (the lower left corner, for example) and observe where that point goes in each piece.

    Here we derive the rules for the right isosceles Sierpinski gasket.

    Invariance of the Gasket

    Note that applying all three of these transformations to the gasket gives the gasket again.

    That is, the gasket is invariant under the simultaneous application of these three transformations.

    What happens if we apply these transformations to some shape other than the gasket?

    What happens if we apply these transformations to the resulting shape?

    What happens if we iterate this process?

    Here is an instance of this idea applied to a sketch of a cat.

    We observe a sequence of pictures that converges to the gasket, independently of the starting shape.

    For concreteness we illustrate this convergence using the gasket rules. Because all the transformations are applied at each iteration, this is called the deterministic algorithm.

    Specifically, suppose T1, ..., Tn are contractions, and P0 is any picture. For example,

    T1(x, y) = (x/2, y/2),

    T2(x, y) = (x/2, y/2) + (1/2, 0),

    T3(x, y) = (x/2, y/2) + (0, 1/2),

    and P0 is any starting picture.

    Generate a sequence of pictures

    P1 = T1(P0) ∪ ... ∪ Tn(P0)

    P2 = T1(P1) ∪ ... ∪ Tn(P1)

    ...

    Pk+1 = T1(Pk) ∪ ... ∪ Tn(Pk)

    This sequence converges to a unique shape, P, the only (compact) shape invariant under the simultaneous application of T1, ..., Tn. That is,

    P = T1(P) ∪ ... ∪ Tn(P)

    Because of this convergence property, P is called the attractor of the IFS {T1, ..., Tn}.
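    The deterministic algorithm can be sketched directly, representing a "picture" as a finite set of points and using the three gasket maps T1, T2, T3 given above (this is a minimal illustration, not a full rendering program):

```python
# Deterministic IFS: apply every map to the whole point set, repeatedly.
def T1(p):
    x, y = p
    return (x / 2, y / 2)

def T2(p):
    x, y = p
    return (x / 2 + 0.5, y / 2)

def T3(p):
    x, y = p
    return (x / 2, y / 2 + 0.5)

def deterministic_ifs(points, transforms, k):
    """Return P_k, starting from the point set P_0 = points."""
    for _ in range(k):
        points = {T(p) for p in points for T in transforms}
    return points

P0 = {(0.3, 0.7)}  # any starting picture
P6 = deterministic_ifs(P0, [T1, T2, T3], 6)
# P6 is a 3**6 = 729 point approximation of the gasket attractor.
```

    Whatever single point P0 contains, after a few iterations the point set is visually indistinguishable from the gasket, illustrating the convergence to the attractor.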


    Inverse problems

    finding the transformations to produce a given fractal

    Given a fractal F, the inverse problem is to find affine transformations T1, ..., Tn for which

    F = T1(F) ∪ ... ∪ Tn(F)

    Here we present a method to solve this problem.

    Solving the inverse problem takes just two steps.

    1. Using the self-similarity (or self-affinity) of F, decompose F as F = F1 ∪ ... ∪ Fn, where each Fi is a scaled copy of F.

    2. For each piece Fi, find an affine transformation Ti for which Ti(F) = Fi. By "find an affine transformation" we mean find the r, s, θ, φ, e, and f values.


    Because the transformations can involve rotations, reflections, and scalings by different factors in different directions, decomposition is not always as simple a task as it may seem at first. Here are some examples of more complicated decompositions.

    Decomposition

    This fractal is an instructive example for people who have seen the gasket and a few of its relatives.

    The primacy of the gasket in early examples of fractals makes this shape one of the easiest to recognize.

    The most common response to first seeing this picture is, "It's half a gasket."

    But we don't have rules for making half of a fractal.

    The main lesson here is that we're looking for scaled copies of the whole shape, and the whole shape is not a gasket.

    Tracing small copies of the outline of the whole shape, perhaps cutting them out of paper, is a good way to build up intuition for this process. Here's a decomposition.

    Note the bottom left piece is a reflected copy of the whole shape.

    In the x-direction we see the familiar Cantor middle thirds set; in the y-direction, just a line segment. Again, look for scaled copies of the whole shape. Here's a decomposition.

    An additional problem is that decompositions never are unique. Here are some examples of different decompositions of the same fractal.

    Find a decomposition of this fractal into smaller copies of itself.

    Here's one decomposition, and here's another.

    Usually, we try to find a decomposition into the smallest number of pieces, keeping in mind that each piece must be a contracted copy of the whole shape.

    We have already seen one decomposition of this fractal.

    When we give up the requirement that the pieces be similar to the whole, new possibilities appear.


    1. Using the self-similarity (or self-affinity) of F, decompose F as F = F1 ∪ ... ∪ Fn, where each Fi is a scaled copy of F.

    2. For each piece Fi, find an affine transformation Ti for which Ti(F) = Fi. By "find an affine transformation" we mean find the r, s, θ, φ, e, and f values.

    In practice, the decomposition can be carried out by hand:

    (a) Trace the main features of the fractal and cut out smaller copies of the tracing.

    (b) To allow for reflections, flip the small copies and on the back trace over the lines on the front. Label the front image with a small F, to distinguish it from its reflection, and to indicate the original orientation.

    (c) Place the small copies, perhaps rotating or reflecting them, to make a copy of the original fractal.

    Examples

    This fractal can be decomposed into three pieces:

    Note the top and bottom left pieces have the same orientation as the entire fractal, while the bottom right piece is reflected across a vertical line.

    Keeping in mind that our transformation rules allow only reflections across the x- and y-axes, some care must be taken with the translation after the reflection.

    r     s     θ    φ    e     f

    0.5   0.5   0    0    0     0

    -0.5  0.5   0    0    1.0   0

    0.5   0.5   0    0    0.0   0.5

    This fractal can be decomposed into three pieces:

    Note the top and bottom left pieces have the same orientation as the entire fractal, while the bottom right piece is rotated.

    Keeping in mind that our transformation rules allow only rotations fixing the origin, some care must be taken with the translation after the rotation.

    Find IFS rules to generate each of these fractals.

    r     s     θ    φ    e     f

    .333 .333 0 0 0 0

    .333 .333 0 0 .667 0

    .333 .333 0 0 0 .667

    .333 .333 0 0 .667 .667

    .333 .333 0 0 .333 .333

    r     s     θ    φ    e     f

    .333 .333 0 0 0 0

    .333 .333 0 0 .333 .333

    .333 .333 0 0 .667 .667

    .333 .333 0 0 0 .667

    r     s     θ    φ    e     f

    .5 .5 0 0 0 0

    .5 .5 0 0 .5 0

    .5 .5 0 0 0 .5

    .25 .25 0 0 .75 .75

    r     s     θ    φ    e     f

    .5 .5 0 0 0 0

    -.5 .5 0 0 1 0

    -.5 .5 0 0 .5 .5


    r     s     θ    φ    e     f

    -.5 .5 90 90 .5 .5

    .5 .5 180 180 1 .5

    .5 .5 -90 -90 0 1

    r     s     θ    φ    e     f

    .5 .5 -90 -90 0 .5

    .5 .5 180 180 1 .5

    .5 .5 0 0 .5 .5

    When the pieces are not scaled by such obvious amounts, we can find scalings and rotations by measuring distances and angles.

    Splitting the fractal into three pieces is not difficult.

    Often it is convenient to specify a point as the origin of the coordinate system.

    The lower left corner can be a good choice, but in general, use any symmetry available, unless there is a compelling reason to do differently.

    First, find the scalings and any reflections, that is, the r and s values.

    Second, find the rotations.

    Third, find the translations.

    Here's the IFS table that generated this fractal.

    With a bit of thought, now we can find an IFS to generate the tree.

    First, it is easy to see the four main branches of the tree are scaled copies of the whole tree.

    The pieces have been pulled apart slightly to emphasize the decomposition.

    The trunk is more complicated.

    Simply shrinking the tree a lot horizontally works for the top of the trunk, but makes the bottom of the trunk too thin.

    Two shrunken copies of the tree are needed to make the trunk.

    Here are the IFS rules, color coded to match each transformation to the corresponding piece of the tree.

    Here is the picture generated by the tree rules, leaving out the second part of the trunk.

    Common Mistakes in Finding Fractals:

    Things that look like fractals but aren't

    First, recalling that no physical fractal can exhibit scaling over infinitely many levels, nevertheless to make a plausible claim of fractality, a pattern must be repeated on at least a few levels.

    The "skulls within skulls" of Dalí's Visage of War are repeated three (maybe four) times.

    In Dalí's Visage of War (1940), note the eyes and mouth each contain a face, whose eyes and mouth each contain a face, many of whose eyes and mouths each contain a face: an obvious, if gruesome, example of fractals in art.

    Dalí painted it in 1940; the Spanish Civil War was to him a source of frightening images. Descharnes described the painting as having "eyes filled with infinite death," referring to the recursive effect set in motion by self-similarity.

    Note also the handprint in the lower right of the painting. This is Dalí's handprint; this is the only painting signed with his handprint, a testimony to the power this painting had for Dalí.

    By contrast, two fists do not make a convincing Cantor set.

    The decomposition of this picture into two pieces, the two fists, does not continue to even one more level.

    The fists are not split into smaller pieces. The more levels of the pattern, the more convincing the fractality of the picture.

    Here is an analogous example based on the Sierpinski tetrahedron.

    This is not plausibly fractal: it is a shape made of four tetrahedra, but the tetrahedra have no substructure.

    This is more believably fractal: the structure has four levels of substructure.


    Second, a repeating pattern alone is not sufficient to guarantee fractality.

    A checkerboard or a brick wall has a repeating pattern, but the repetition is with respect to translation, whereas for fractals the appropriate transformation is magnification.

    Third, repeating a pattern under magnification is not sufficient to guarantee fractality.

    For example, a spiral is symmetric under magnification about its center point, but about no other point.

    Iterating this process, the limiting shape is just a single point.

    In order to produce a fractal, at each level the decomposition must involve at least two scaled copies.

    Nested dolls are another example of a non-fractal involving a single scaling transformation, as is the cat bottle.

    The bottle label includes a cat and a bottle,

    with a label that includes a cat and a bottle,

    with a label that includes a cat and a bottle,

    and so on.

    The limit of this process is a single point, not a fractal.

    Inside the largest doll nestles a smaller doll,

    inside that doll nestles a still smaller doll,

    inside that doll nestles an even smaller doll, and so on.

    The limit of this process is a single point, not a fractal.

    Both earrings have pictures of the cow with two earrings,

    both of which have pictures of the cow with two earrings,

    both of which have pictures of the cow with two earrings,

    and so on.

    The limit of the cow pictures is a Cantor set.

    On the other hand, the cow picture is fractal, as would be more obvious if the cow's left earring were turned toward us.

    Here we study the random IFS algorithm, another way to render IFS images. This includes a careful look at what random means.

    To motivate the Random IFS algorithm, we begin with the Chaos Game.

    Here we observe that the apparent effect of randomness is to guarantee the points dance across the picture to be generated.

    Definition: Movement, in random order, toward a collection of points can fill in a polygon, or grow a fractal.

    The Chaos Game is played by specifying a number of vertices (a1, b1), (a2, b2), ..., (aN, bN), and a scaling factor r < 1.

    To play the game, start with the point (x0, y0) and pick one of the vertices, say (ai, bi), randomly.

    The point (x1, y1) is the fraction r of the distance between (ai, bi) and (x0, y0). That is,

    (x1, y1) = r(x0, y0) + (1 - r)(ai, bi)

    For example, with four vertices, r = 1/3, and (a2, b2) the first randomly selected vertex, we obtain the first point.

    (If r = 1, the point (x1, y1) is the same as the initial point (x0, y0); if r = 0, the point (x1, y1) is the same as the selected vertex (ai, bi).)

    Now pick another vertex, (aj, bj), randomly. The point (x2, y2) is given by

    (x2, y2) = r(x1, y1) + (1 - r)(aj, bj)

    and so on.

    The Chaos Game Plot is the sequence of points (x0, y0), (x1, y1), ... generated this way.
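    The game described above takes only a few lines to sketch. This is an assumed implementation (the vertex list, r, and starting point are parameters, and vertex choice uses Python's random module):

```python
import random

def chaos_game(vertices, r, n, start=(0.5, 0.5), seed=1):
    """Generate n points of the Chaos Game Plot."""
    rng = random.Random(seed)
    x, y = start
    points = []
    for _ in range(n):
        a, b = rng.choice(vertices)
        # (x_{k+1}, y_{k+1}) = r (x_k, y_k) + (1 - r)(a_i, b_i)
        x, y = r * x + (1 - r) * a, r * y + (1 - r) * b
        points.append((x, y))
    return points

# Three vertices and r = 1/2: the plot settles onto a Sierpinski gasket.
gasket_points = chaos_game([(0, 0), (1, 0), (0.5, 1)], 0.5, 2000)
```

    Plotting `gasket_points` (with any plotting tool) reproduces the experiments discussed in the examples that follow.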


    As an example of the Chaos Game, take four vertices, the corners of the unit square, and take r = 1/2:

    (a3, b3) = (0, 1) (a4, b4) = (1, 1)

    (a1, b1) = (0, 0) (a2, b2) = (1, 0)

    Suppose the random number generator begins by selecting the vertices in this order: 1, 3, 4, 3, 2.

    See the first five points generated by this run of the chaos game.

    If we continue, the points will fill in the square.

    This should be plausible: we start with a point inside the unit square, and each move is half-way between where we are and a corner of the square, so we never leave the square.

    Because we select the corners randomly, no part of the square is preferred over any other.

    So since some parts of the square fill in, all parts must fill in.

    Do you believe this argument?

    What would happen if we used just three vertices (a1, b1), (a2, b2), and (a3, b3)?

    As with the square, we start with a point in the triangle. (In this example, it's on the edge of the triangle, but that's still in the triangle.)

    Each move is half-way between where we are and a corner of the triangle, so we never leave the triangle.

    Because we select the corners randomly, no part of the triangle is preferred over any other.

    So since some parts of the triangle fill in, all parts must fill in.

    Thus played with three vertices of a triangle, the chaos game should fill in the triangle. Right?

    Here is the answer.

    Here are more Chaos Game examples. Try to determine the shape.

    vertices the corners of a square, r = 1/3

    move the top right vertex to the left, r = 1/2

    five vertices, four at the corners of a square, one at the center of the square, r = 1/2

    five vertices, four at the corners of a square, one at the center of the square, r = 1/3

    The chaos game often is used as an introduction to the more general Random IFS.

    To illustrate its simplicity, frequently the chaos game is performed manually.

    While this does convince us of the simplicity of the chaos game, it is less effective in showing that the chaos game will generate fractals. For example, generating 30 points manually requires some patience, but does the picture give much hint of a gasket?

    The right picture, consisting of 300 points, is much more convincingly a gasket.

    No one would play the chaos game manually for 300 points, but this is not such a problem: software can produce a 300 point chaos game in milliseconds (or less).

    This is to show that combining the results of 10 people generating 30 points each produces as good a picture of the gasket as that of a single person generating 300 points.

    Purpose: To use Random IFS to illustrate an interesting property of random sequences of numbers.

    Materials: Triangle template, about 12 overhead transparencies, a die (singular of dice), adhesive tape, a ruler with cm scale, a permanent marking pen, and an overhead projector to display the data.

    Here are 10 samples of 30 points each


    Below on the left is the aggregate of these 10 pictures; on the right is the picture of 300 iterates of one point. Both are reasonable representations of the gasket.

    Among other things, the 30 point pictures illustrate two less-familiar aspects of the chaos game.

    (1) Manual chaos game experiments require a great deal of patience to produce a recognizable image.

    (2) The evident variability between short chaos game runs is considerable.

    Conclusion: The superposition of several short runs of the chaos game is at least visually indistinguishable from a longer run.

    The Random IFS Algorithm

    Given IFS rules, the Deterministic Algorithm renders a picture of the fractal by

    1. applying all the rules to any (compact) initial picture,

    2. then applying all the rules to the resulting picture,

    3. and continuing this process.

    Regardless of the starting shape, this sequence of pictures converges to a unique limiting shape, the only (compact) set invariant under simultaneous application of all the rules.

    The Random Algorithm is another method of rendering the fractal determined by a given set of rules, T1, ..., TN.

    Definition and illustration of the random algorithm

    Start with a point (x0, y0) belonging to the fractal; for example, take (x0, y0) to be the fixed point of one of the Ti.

    A fixed point of a transformation T(x, y) is a point left unchanged by the transformation. That is,

    T(x, y) = (x, y)

    Example 1: T(x, y) = (x/2, y/2) has fixed point (0, 0).

    (x, y) = T(x, y) = (x/2, y/2). So x = x/2 and y = y/2, hence x = 0 and y = 0.

    Example 2: T(x, y) = (x/2, y/2) + (1/2, 0) has fixed point (1, 0).

    (x, y) = T(x, y) = (x/2, y/2) + (1/2, 0). So x = x/2 + 1/2 and y = y/2, hence x = 1 and y = 0.
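    The two worked examples follow one pattern: for a map of the form T(x, y) = (x/2, y/2) + (e, f), solving (x, y) = T(x, y) gives x = 2e and y = 2f. A small sketch of that computation:

```python
# Fixed point of T(x, y) = (x/2, y/2) + (e, f):
# x = x/2 + e gives x = 2e, and y = y/2 + f gives y = 2f.
def fixed_point(e, f):
    return (2 * e, 2 * f)

print(fixed_point(0, 0))    # Example 1: (0, 0)
print(fixed_point(0.5, 0))  # Example 2: (1.0, 0)
```

    Note this shortcut applies only to maps with scaling 1/2 and no rotation; a general affine contraction requires solving the full 2x2 linear system.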

    If T(x, y) is a contraction, then it has exactly one fixed point.

    A contraction is a transformation T that reduces the distance between every pair of points.

    That is, there is a number r < 1 with

    dist(T(x, y), T(x', y')) ≤ r · dist((x, y), (x', y'))

    for all pairs of points (x, y) and (x', y').

    Here dist denotes the Euclidean distance between points:

    dist((x, y), (x', y')) = ((x - x')^2 + (y - y')^2)^(1/2)

    The contraction factor of T is the smallest r satisfying

    d(T(x, y), T(x', y')) ≤ r · d((x, y), (x', y'))

    for all pairs of points (x, y), (x', y').

    In general, contractions can reduce distances between points by different amounts,depending on the position of the points.

    Here are some special kinds of contractions.

    A similarity reduces all distances by the same number, r < 1. That is,

    d(T(x, y), T(x', y')) = r · d((x, y), (x', y'))

    for all pairs of points (x, y), (x', y').

    The transformation T(x, y) = (rx, ry) is an example; its contraction factor is r.

    An affinity reduces distances by different amounts in different directions. For example,

    T(x, y) = (rx, sy),

    where both r < 1 and s < 1, and r and s are different.

    If all the transformations of an IFS are contractions, then iterating the IFS is guaranteed to converge to a unique shape.

    Let {n1, n2, ...} be a random sequence of numbers, each from {1, ..., N}.

    Generate a sequence of points

    (x1, y1) = Tn1(x0, y0),

    (x2, y2) = Tn2(x1, y1),

    ...

    We shall see this sequence of points eventually will fill up the fractal to any prescribed accuracy. For example, here are pictures of the Random Algorithm applied to the gasket rules.

    (Figures: the Random Algorithm applied to the gasket rules, with 500 points and with 5000 points.)
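    The random algorithm itself is short. Here is an assumed sketch using the gasket rules: start at the fixed point of T1 and apply a randomly chosen rule at each step.

```python
import random

# The three gasket maps, each a contraction by 1/2.
TRANSFORMS = [
    lambda x, y: (x / 2, y / 2),        # T1, fixed point (0, 0)
    lambda x, y: (x / 2 + 0.5, y / 2),  # T2, fixed point (1, 0)
    lambda x, y: (x / 2, y / 2 + 0.5),  # T3, fixed point (0, 1)
]

def random_ifs(n, seed=1):
    """Generate n points of the random-algorithm orbit."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0  # fixed point of T1, a point of the fractal
    points = []
    for _ in range(n):
        T = rng.choice(TRANSFORMS)
        x, y = T(x, y)
        points.append((x, y))
    return points

orbit = random_ifs(5000)  # 5000 points filling in the gasket
```

    Because the starting point lies on the attractor and every map sends the attractor into itself, each generated point also lies on the attractor; the randomness just spreads the points across it.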


    This leads to the question: What is random?

    What makes an infinite sequence of digits random?

    Are these random sequences?

    1111111111111111111111111111...

    1212121212121212121212121212...

    1010010001000010000010000001...

    1415926535897932384626433832...

    What makes these sequences nonrandom? We can give short descriptions that exactly specify the entire sequence. This motivates our definition of randomness.

    This sequence is the first 28 digits in the decimal expansion of pi.

    While the digits of pi may pass many statistical tests for randomness, we would not call pi random.

    It is a specific number, the ratio of the circumference to the diameter of any circle, always the same.

    How do you know this is the first 28 decimal digits of pi?

    If you didn't know the first 28 decimal digits of pi, would you think this sequence is random?

    Does our perception of randomness depend on how much we know?

    In case you're interested, here are the first 1000 decimal digits of pi

    141592653589793238462643383279502884197169399375105820974

    944592307816406286208998628034825342117067982148086513282

    306647093844609550582231725359408128481117450284102701938

    521105559644622948954930381964428810975665933446128475648

    233786783165271201909145648566923460348610454326648213393

    607260249141273724587006606315588174881520920962829254091

    715364367892590360011330530548820466521384146951941511609

    433057270365759591953092186117381932611793105118548074462

    379962749567351885752724891227938183011949129833673362440

    656643086021394946395224737190702179860943702770539217176

    293176752384674818467669405132000568127145263560827785771

    342757789609173637178721468440901224953430146549585371050

    792279689258923542019956112129021960864034418159813629774

    771309960518707211349999998372978049951059731732816096318

    595024459455346908302642522308253344685035261931188171010

    003137838752886587533208381420617177669147303598253490428

    Motivated by our thoughts about these examples, we define an infinite sequence of digits to be random if it cannot be specified completely in any way shorter than listing the whole sequence.
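To make "short description" concrete, each of the patterned sequences above can literally be generated by a few lines of code, while a truly random sequence admits no such shortcut. A minimal Python sketch (the function names are our own illustration, not part of the original text):

```python
# Short programs that completely specify each "nonrandom" sequence above.
def repeat_1(n):
    """1111... : the constant sequence."""
    return "1" * n

def alternate_12(n):
    """121212... : the period-2 sequence."""
    return ("12" * n)[:n]

def growing_gaps(n):
    """101001000100001... : 1s separated by ever-longer runs of 0s."""
    s = ""
    k = 1
    while len(s) < n:
        s += "1" + "0" * k
        k += 1
    return s[:n]

print(repeat_1(28))
print(alternate_12(28))
print(growing_gaps(28))
```

Each program is far shorter than the infinite sequence it specifies, which is exactly why those sequences fail the definition of randomness.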

    How can we generate a random sequence?

    To make an infinite random sequence of 0s and 1s, toss a coin infinitely many times. Each time a heads comes up, put a 1 in the sequence. Each time a tails comes up, put a 0 in the sequence.

    So if the coin toss started out heads, heads, tails, heads, tails, tails, tails, heads

    the random sequence would start

    11010001

    Of course, computers don't toss coins, but rather generate pseudorandom numbers.

    One method starts with the time on your computer's clock, multiplies by a large number, divides by another large number, and takes the remainder.
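The multiply-divide-remainder recipe described here is essentially a linear congruential generator. A minimal sketch, with illustrative constants of our own choosing (classic glibc-style values), not taken from the text:

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator: multiply, add, take the remainder."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)                        # a clock time would normally seed this
bits = [next(gen) % 2 for _ in range(10)]  # pseudorandom 0s and 1s
print(bits)
```

The sequence is fully determined by the seed, which is exactly why such numbers are pseudorandom rather than random.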

    Random sequences have this property:

    Every infinite sequence of random numbers contains all finite sequences.

    Suppose we have an infinite sequence of 0s and 1s, and nowhere in the sequence do we find 000.

    By the definition given, this infinite sequence is not random. Why?

    Suppose we're describing the sequence by listing all its terms.

    Whenever we get to a 00 pair, we don't have to say what the next number is.

    It MUST be 1, because otherwise the infinite sequence would contain 000.

    Consequently, we don't have to list the entire infinite sequence to specify it completely. We say only once that the sequence does not contain the triple 000, and then whenever the pair 00 occurs, we know the next number must be 1.

    Similar arguments show that all finite sequences must occur somewhere (in fact, infinitely often) in an infinite random sequence. If any one is missing, we can use this missing sequence to describe the infinite sequence without listing it in its entirety.

    To understand why the Random and Deterministic algorithms generate the same pictures, we also need to understand the notion of the address of parts of a fractal.

    Addresses are the main tool for relating fractals and dynamics.

    The order of the elements of an address is important, and, to some, counterintuitive.

    The notion of addresses is familiar in one dimension from the decimal expansion of real numbers in the unit interval, [0, 1].

    The left-most digit of the decimal expansion of x tells into which 10th of [0, 1] x falls.

    The second digit tells into which hundredth - that is, which 10th of the 10th - x falls.

    And so on. Here is an illustration.

    Addresses in fractals

    To relate this to IFS, we need IFS rules to generate the unit interval. There are (infinitely) many families of such rules, but for ease of interpretation with the decimal expansion, we use

    Ti(x) = x/10 + i/10

    for i = 0, ..., 9. Then

    Ti(I) is the ith 10th,

    TiTj(I) is the jth 100th of the ith 10th,

    and so on. Note the order of the subscripts. This is the tricky part of understanding addresses. We say

    the digit i is the address of the ith 10th,

    the pair ij is the address of the jth 100th of the ith 10th,

    and so on. Notice from left to right the address digits specify smaller intervals.

    Addresses are unique.
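A quick sketch in Python (our own illustration) makes the correspondence explicit: the address digits of a point x in [0, 1) are exactly its decimal digits, and composing the rules Ti prepends digits on the left.

```python
def T(i, x):
    """IFS rule T_i(x) = x/10 + i/10, mapping [0, 1] onto the i-th tenth."""
    return x / 10 + i / 10

def address(x, length):
    """First `length` address digits of x in [0, 1): its decimal digits."""
    digits = []
    for _ in range(length):
        d = int(x * 10)      # which tenth of the current interval x falls in
        digits.append(d)
        x = x * 10 - d       # zoom in: rescale that tenth back to [0, 1)
    return digits

# T_i T_j applied to a point lands in the j-th 100th of the i-th 10th:
x = T(3, T(7, 0.5))          # 0.375, so the address should start 3, 7
print(address(x, 2))         # [3, 7]
```

Note the order: the transformation applied last contributes the left-most address digit.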


    Addresses of a square

    For concreteness in the two-dimensional case, we consider the transformations

    T3(x, y) = (x/2, y/2) + (0, 1/2) T4(x, y) = (x/2, y/2) + (1/2, 1/2)

    T1(x, y) = (x/2, y/2) T2(x, y) = (x/2, y/2) + (1/2, 0)

    These generate the filled-in unit square S. That is,

    S = T1(S) ∪ T2(S) ∪ T3(S) ∪ T4(S),

    with overlaps only along edges.

    To each of the 1/2 × 1/2 squares Ti(S) we associate the length-1 address i.

    Each of these squares can be subdivided by iterating this decomposition process. For example,

    T1(S) = T1T1(S) ∪ T1T2(S) ∪ T1T3(S) ∪ T1T4(S).

    To each of the 1/4 × 1/4 squares TiTj(S) we associate the length-2 address ij,

    and so on.

    In order of application, addresses are read right to left: the left-most digit is the index of the most recent transformation applied.

    Because this sometimes seems confusing, we emphasize that the order of addresses is consistent with the order of composition of functions:

    ij is the address of TiTj(S).

    Another way to think of addresses is as relative coordinates. The 1/4 × 1/4 squares with addresses 11, 12, 13, and 14 are the 1, 2, 3, and 4 parts of square 1.
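The relative-coordinate reading of addresses can be checked with a short sketch (Python, our own illustration): applying the digits of an address right to left, i.e. in composition order, locates the corresponding subsquare.

```python
def T(i, p):
    """The four square IFS rules: shrink by 1/2, then translate."""
    shifts = {1: (0.0, 0.0), 2: (0.5, 0.0), 3: (0.0, 0.5), 4: (0.5, 0.5)}
    x, y = p
    dx, dy = shifts[i]
    return (x / 2 + dx, y / 2 + dy)

def corner(address):
    """Lower-left corner of the subsquare with the given address: ij = TiTj(S)."""
    p = (0.0, 0.0)                 # lower-left corner of the unit square S
    for i in reversed(address):    # apply the right-most digit first
        p = T(i, p)
    return p

# Address 14: the "4" part of square 1, a quarter-size square at (1/4, 1/4)
print(corner([1, 4]))              # (0.25, 0.25)
```

Reversing the digits before applying the maps is exactly the "order of composition" convention stated above.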

    T3(x, y) = (x/2, y/2) + (0, 1/2) T4(x, y) = (x/2, y/2) + (1/2, 1/2)

    T1(x, y) = (x/2, y/2) T2(x, y) = (x/2, y/2) + (1/2, 0)

    Longer addresses

    Because each Ti is a contraction, longer addresses specify smaller portions of S. For example, here are the length-3 addresses for the square transformations.

    T3(x, y) = (x/2, y/2) + (0, 1/2) T4(x, y) = (x/2, y/2) + (1/2, 1/2)

    T1(x, y) = (x/2, y/2) T2(x, y) = (x/2, y/2) + (1/2, 0)

    That longer addresses specify locations with greater accuracy is part of our common experience. Let's abandon geometrical abstraction and turn to our own sense of place.

    Where are you?

    At some fairly crude level, you are on the earth.

    More precisely, you are in Asia, on the earth.

    Still more precisely, you are in India, in Asia, on the earth

    You are in Uttarakhand, in India, in Asia, on the earth.

    You are in Roorkee, in Uttarakhand, in India, in Asia, on the earth.

    You are at IIT Roorkee, in Roorkee, in Uttarakhand, in India, in Asia, on the earth.

    You are in LHC003, at IIT Roorkee, in Roorkee, in Uttarakhand, in India, in Asia, on the earth.

    You are in the second row, third seat, in LHC 003, at IIT Roorkee, in Roorkee, in Uttarakhand, in India, in Asia, on the earth.

    This general kind of description has been familiar to us from childhood, so we have known for years that a longer address specifies location more precisely.

    For fractals with symmetries, different rules can generate the same shape, butwith different addresses for the same region.

    Addresses and Symmetries

    Both IFS tables generate the equilateral Sierpinski gasket.

    For the first table, it is not difficult to see the length-2 addresses (left).

    For the second table, the reflection in rule 2 has an effect on the addresses, for example, the length-2 addresses

    To see how address 21 winds up in the indicated position, start with the solid triangle S:

    apply transformation T1, obtaining T1(S);

    then apply T2, shrinking, reflecting across the y-axis,

    and translating by 1 horizontally.

    The resulting red triangle is T2(T1(S)), hence has address 21.


    To understand why the Random and Deterministic algorithms generate the same pictures, we tried to understand the notion of the address of parts of a fractal.

    Now we can show why the Random and Deterministic algorithms generate the same picture.

    First, fix a resolution, usually one pixel, to which the picture is to be rendered.

    Then we show

    that long enough addresses specify regions smaller than a pixel,

    that randomness guarantees all finite addresses are visited by the points generated by the random IFS algorithm, and

    that consequently every pixel of the attractor is visited.

    Diameter goes to 0 under iteration of any contraction map

    The diameter of a set is the maximum distance between any pair of points in the set.

    For example, the diameter of a circle is just the common notion of diameter; the diameter of a square is the length of its diagonal.

    Some diameters

    Because all the IFS rules are contractions, the diameter of a region of address length N goes to 0 as N goes to infinity. We illustrate this with the four transformations

    T3(x, y) = (x/2, y/2) + (0, 1/2) T4(x, y) = (x/2, y/2) + (1/2, 1/2)

    T1(x, y) = (x/2, y/2) T2(x, y) = (x/2, y/2) + (1/2, 0)

    diam(S) = √2

    diam(Ti(S)) = √2/2

    diam(TjTi(S)) = √2/4

    and in general

    diam(TiN...Ti1(S)) = √2/2^N

    Consequently, diam(TiN...Ti1(S)) → 0 as N → ∞.

    Now we show how the Random IFS algorithm fills, to a given resolution, the attractor of the Deterministic IFS algorithm.

    Because

    diam(TiN...Ti1(S)) = √2/2^N,

    if we take N large enough that

    √2/2^N < resolution,

    then the Random Algorithm will fill in the picture to the desired resolution if all regions of address length N are visited.

    Why should the Random Algorithm do this?

    First, recall the Random Algorithm starts with the fixed point of one of the Ti.

    For definiteness, say we start with the fixed point (x0, y0) of T1.

    The address of this fixed point is 111... = (1).

    Iteration and address shift: how iteration affects the address of the points generated by the Random IFS algorithm.

    Observe

    (x0, y0) = T1(x0, y0) = T1(T1(x0, y0)) = T1(T1(T1(x0, y0))) = ...

    So (x0, y0) belongs to the square with address 1, the square with address 11, the square with address 111, and so on.

    Because the diameters of these squares are shrinking to 0, the address 111... = (1) corresponds to the single point (x0, y0).

    Alternately, note that the only point left unchanged by repeated application of T1 is the point with address 111... = (1).

    Suppose the first transformation applied is Ti1, the next Ti2, and so on.

    What is the effect of these transformations on the address of the point, and on the address-length-N region in which the point lies?

    point | address of the point | address-length-N region containing the point

    (x0, y0) | (1) | 1^N

    (x1, y1) = Ti1(x0, y0) | i1(1) | i1 1^(N-1)

    (x2, y2) = Ti2(x1, y1) | i2i1(1) | i2i1 1^(N-2)

    (x3, y3) = Ti3(x2, y2) | i3i2i1(1) | i3i2i1 1^(N-3)

    ... | ... | ...

    (xN, yN) = TiN(xN-1, yN-1) | iN...i3i2i1(1) | iN...i3i2i1

    (xN+1, yN+1) = TiN+1(xN, yN) | iN+1iN...i3i2i1(1) | iN+1iN...i3i2

    ... | ... | ...

    So we see each new transformation applied has this effect on the N-digit address: discard the right-most digit, shift the remaining N-1 digits one place to the right, and insert the new index on the left.
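This shift can be sketched directly on the address itself (a small Python illustration of ours, not from the original):

```python
def apply_transform(address, i, N):
    """Effect of applying T_i on a length-N address: insert the new index
    on the left, shift the rest right, and drop the right-most digit."""
    return ([i] + address)[:N]

addr = [1, 1, 1]            # start at the fixed point of T1, address 111
for i in [2, 3, 4]:         # apply T2, then T3, then T4
    addr = apply_transform(addr, i, 3)
    print(addr)             # [2, 1, 1], then [3, 2, 1], then [4, 3, 2]
```
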

    If the transformations are applied randomly, then eventually

    every finite sequence of transformations will occur,

    and in particular, every sequence of length N will be applied.

    Consequently,

    every region with address length N will be visited by the points (xk, yk).

    To the specified resolution, the Random Algorithm will generate the same picture as the Deterministic Algorithm.
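A sketch of this argument in code (assuming the four square rules; the cell-counting helper is our own): with many more random applications than cells, every address-length-3 region of the square attractor is visited.

```python
import random

def T(i, p):
    """The four square IFS rules: shrink by 1/2, then translate."""
    shifts = {1: (0.0, 0.0), 2: (0.5, 0.0), 3: (0.0, 0.5), 4: (0.5, 0.5)}
    return (p[0] / 2 + shifts[i][0], p[1] / 2 + shifts[i][1])

def visited_cells(n_points, N, seed=0):
    """Run the Random IFS Algorithm and record which of the 2^N x 2^N
    resolution cells (the address-length-N regions) get visited."""
    rng = random.Random(seed)
    p = (0.0, 0.0)                       # the fixed point of T1, address (1)
    cells = set()
    for _ in range(n_points):
        p = T(rng.choice([1, 2, 3, 4]), p)
        cells.add((int(p[0] * 2**N), int(p[1] * 2**N)))
    return cells

# At resolution 1/8 there are 4^3 = 64 address-length-3 regions, and with
# 20000 points the random algorithm fills all of them:
print(len(visited_cells(20000, 3)))
```
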


    For example, suppose we specify the resolution corresponding to addresses of length N = 3, and we start with the point (x0, y0) with address 111... = (1).

    To the specified resolution, (x0, y0) lies in the region with address 111.

    If T2 is the first transformation applied, then the resulting point (x1, y1) = T2(x0, y0) lies in the region with address 211.

    If T3 is the next transformation applied, then the resulting point (x2, y2) = T3(x1, y1) lies in the region with address 321.

    If T4 is the next transformation applied, then the resulting point (x3, y3) = T4(x2, y2) lies in the region with address 432.

    Continuing will fill in all the 4³ = 64 regions of address length 3.

    Example: Probability and the Random IFS Algorithm

    In the Random IFS Algorithm the transformations Ti are applied in random order, but they need not be applied equally often.

    Associated with each Ti is a probability pi, 0 < pi < 1, representing how often each transformation is applied. That is, when N points are generated, each Ti is applied about N·pi times.

    To illustrate the effect of changing the probabilities, we use the IFS

    T3(x, y) = (x/2, y/2) + (0, 1/2) T4(x, y) = (x/2, y/2) + (1/2, 1/2)

    T1(x, y) = (x/2, y/2) T2(x, y) = (x/2, y/2) + (1/2, 0)

    We take

    p4 to range from 0 to 1 in steps of 0.05,

    and

    p1 = p2 = p3 = (1 - p4)/3.

    Starting with p4 = 0, the first picture is the gasket. Do you see why?

    Very roughly speaking, the probability pi of applying Ti is the fraction of the whole attractor A occupied by Ti(A).

    The area contraction factor of Ti is

    Ai = ri·si·(cos(θi)cos(φi) + sin(θi)sin(φi))

    (This is just the determinant of the matrix formulation of Ti.)

    Note that if θi = φi, the area contraction factor simplifies to

    Ai = ri·si

    Then the probability is given by

    pi = Ai/(A1 + ... + AN)

    Here is a way to find the probabilities that give an approximately uniform fill of the attractor.
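One such recipe, sketched in Python under the formulas above (the helper names are ours): take each pi proportional to the area contraction factor Ai.

```python
import math

def area_factor(r, s, theta, phi):
    """Area contraction factor: the determinant of the matrix part of T_i."""
    return r * s * (math.cos(theta) * math.cos(phi)
                    + math.sin(theta) * math.sin(phi))

def probabilities(rules):
    """Probabilities proportional to each rule's area contraction factor,
    p_i = A_i / (A_1 + ... + A_N)."""
    areas = [area_factor(*rule) for rule in rules]
    total = sum(areas)
    return [a / total for a in areas]

# The four square rules: r = s = 1/2 and no rotation (theta = phi = 0),
# so each A_i = 1/4 and each p_i = 1/4.
rules = [(0.5, 0.5, 0.0, 0.0)] * 4
print(probabilities(rules))      # [0.25, 0.25, 0.25, 0.25]
```
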

    Driven IFS

    What happens if we use a non-random sequence in the random IFS algorithm?

    In particular, can we run the random IFS algorithm with a sequence of data - daily closing prices of a stock, or the intervals between your heartbeats, for example?

    Will patterns in the IFS picture reveal patterns in the data?

    Because we use a data sequence to select the order in which the transformations are applied, we call this approach driven IFS. The data drive the order in which the IFS rules are applied.

    Referring to the chaos game, Ian Stewart asked "What happens if the computer's random number generator is replaced by some more familiar dynamical system?"

    For most of his tests, Stewart used the chaos game with fixed points at the vertices (0, 0), (√3/2, 1/2), and (0, 1) of an equilateral triangle. The corresponding IFS rules are

    Stewart's experiments

    T1(x, y) = (x/2, y/2)

    T2(x, y) = (x/2, y/2) + (0, 1/2)

    T3(x, y) = (x/2, y/2) + (√3/4, 1/4)

    For reference, a random number generator (1000 points) gives


    is produced by iterating the logistic map xn+1 = 4xn(1 - xn).

    is produced from sin(t) + sin(t²), t = 1, 2, ..., 1000.

    is produced from sin(t) + sin(t³) + sin(t⁵)

    Pictures are generated from sequences of numbers, coarse-grained into three equal-size bins.

    Coarse-Graining the Data

    We turn the data y1, y2, ..., yN into a sequence i1, i2, ..., iN of 1s, 2s, 3s, and 4s. This sequence is called the symbol string associated with the data.

    The data values yi often are measured as decimals, and because we are converting these to only four values, the process of turning the yk into ik is called coarse-graining.

    The range of y values corresponding to a symbol is the bin of that symbol.

    Though there are others, we use five kinds of coarse-graining:

    equal-size bins Divide the range of values into four intervals of equal length.

    equal-weight bins Arrange the bin boundaries so (approximately) the same number of points lie in each bin.

    zero-centered bins For data whose sign is important, take 0 as the boundary between bins 2 and 3; place the other boundaries symmetrically above and below 0. Unlike the first two cases, this is a family of coarse-grainings depending on the placements of the other two bin boundaries.

    mean-centered bins Take the mean of the data to be the boundary between bins 2 and 3; place the other boundaries symmetrically above and below the mean, usually expressed as a multiple of the standard deviation.

    median-centered bins Take the median of the data to be the boundary between bins 2 and 3; place the other boundaries symmetrically above and below the median, usually expressed as a multiple of the range. Note the equal-weight bins are a special case of this.

    To illustrate the different kinds of coarse-graining, we use a data set consisting of successive differences of 1000 numbers generated by iterating the logistic map.

    Here is the time series y1, y2, ..., y1000 generated by 1000 iterates of the logistic map with equal-size bin lines drawn, and the corresponding driven IFS.

    Here is the time series y2 - y1, y3 - y2, ..., y1000 - y999 generated by successive differences of 1000 iterates of the logistic map with equal-size bin lines drawn, and the corresponding driven IFS.

    Here is the time series y1, y2, ..., y1000 generated by 1000 iterates of the logistic map with equal-weight bin lines drawn, and the corresponding driven IFS.

    Here is the time series y2 - y1, y3 - y2, ..., y1000 - y999 generated by successive differences of 1000 iterates of the logistic map with equal-weight bin lines drawn, and the corresponding driven IFS.

    Driven IFS Rules (Square functions)

    T3(x, y) = (x/2, y/2) + (0, 1/2) T4(x, y) = (x/2, y/2) + (1/2, 1/2)

    T1(x, y) = (x/2, y/2) T2(x, y) = (x/2, y/2) + (1/2, 0)

    This IFS generates the filled-in unit square. Consequently, any departure from uniform randomness will be visible through departures from uniform fill of the square.

    Here are some examples, all with 10000 points.

    uniform random sequence

    p1 = p4 = 0.1; p2 = p3 = 0.4

    p1 = 0.1; p2 = p3 = p4 = 0.3

    p1 = p3 = 0.1; p2 = p4 = 0.4


    IFS Driven by DNA Sequences

    An example first explored by H. Joel Jeffrey. The genetic code is written in an alphabet of four characters: C, A, T, and G.

    A sequence of several billion of these makes each of us. A sequence of 3957 symbols is needed to encode the formation of the enzyme amylase.

    T G A A T T C A A G T T T G G T G C A A A A C T T G G C A C A G T T A T C C GC A A G T G G A A T G G A G A G A A G A T G T C C T A T T T A A A G T A A A T A

    T A T A C G A T T T T G T C A T T T G T T C T G T C A T A C A T C T G T T G T CA T T T T C T T A A A T A T T G T A A C T T A A A T T G T T G A T T A T T A G T TA G G C T T A T T G T T C A T T T A T C C T T A A T T A A T T A T G T T T T T C AT T T G A T A C A T C A G T C A C C T G A T A A C A G C T G A A A T C T A A A GT A T C A C T T A G T G A G T T T T G T T G G G T T G T G T T A A G T C C A TT A G A G T C T A A G A A T G T T T G C T T A T G G C C T A C T A A A A A T A TG G T A G C A T C C T A A G A A T A G T T A T A C T A A A A A G T G A T C C C TA T A A T A T G A C T A C A C T A G G G A A T T T A T T T A T G C T A C A T T A

    G G G A A T T A A T T G A A A T T T A A A A G T G A A T G T A A A A G C A G A GT T A T A A A T T A A T T T C C A T T C T G T A T T A T A T A A C A T G G A T G T

    C T T A A T T C T C A A G T C C A T A A T G T T A C A A T A A A A T T T T A A A AA T C T A A A A T A A A A T C A A A A C A A A A G A T T A T A T A G T A A A A C C

    T A A A T C T G G A T A A A A T T C C C A T G T A T G T T A T C A T A G A A A A TT G T A C A T A T G T G T A C A T A T A C A A T.

    How can we convert a DNA sequence into an IFS picture?

    Read the sequence in order, and

    apply T1 whenever C is encountered,

    apply T2 whenever A is encountered,

    apply T3 whenever T is encountered, and

    apply T4 whenever G is encountered.

    Of course, any assignment of transformations to C, A, T, and G can be used.
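A sketch of this driving scheme in Python (the code is our own; the base-to-rule assignment follows the list above):

```python
def dna_driven_ifs(sequence):
    """Drive the square IFS with a DNA string: C->T1, A->T2, T->T3, G->T4."""
    shifts = {"C": (0.0, 0.0), "A": (0.5, 0.0),
              "T": (0.0, 0.5), "G": (0.5, 0.5)}
    x, y = 0.5, 0.5                      # conventional starting point
    points = []
    for base in sequence:
        dx, dy = shifts[base]
        x, y = x / 2 + dx, y / 2 + dy    # apply the rule for this base
        points.append((x, y))
    return points

pts = dna_driven_ifs("TGAATTCAAG")       # first ten bases of the amylase sequence
print(pts[0])                            # T applied to (0.5, 0.5) -> (0.25, 0.75)
```

Plotting the returned points for the full 3957-base sequence would reproduce the picture described below.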

    On the left is the picture that results from applying these rules to the amylase sequence. Note there are very few points in the region with address GA. (Remember the ordering of the addresses.)

    On the right is a picture that results when 3957 points are generated randomly, except that T4 never immediately follows T1.

    Not an exact match, but certainly suggestive.

    Here are some more examples of IFS driven by DNA sequences.

    First, two examples from ion channels in human cardiac cells.

    potassium channel    sodium channel

    Here are four examples from chromosome 7 from the University of Washington human genome site http://www.genome.washington.edu/UWGC/

    g1346a094 g0771a003 G1564A209 gap3

    The interpretations of these pictures are still being investigated. Stay tuned.

    Cyclic Driven IFS: What happens if we drive an IFS in a repeating pattern?

    Here we will learn to recognize the visual signature of IFS driven by cyclic data, that is, numbers that repeat a particular pattern.

    The simplest repeated sequence is constant: just repeat the same number, for example, 11111... = (1).

    Starting with (1/2, 1/2), applying T1(x, y) = (x/2, y/2) repeatedly produces the sequence of points

    T1(1/2, 1/2) = (1/4, 1/4),

    T1(1/4, 1/4) = (1/8, 1/8),

    T1(1/8, 1/8) = (1/16, 1/16),

    ...

    converging to the lower left corner of the unit square.

    Because it is obtained by applying T1 infinitely many times, the address of this point is 11111... = (1).

    T1(x, y) = (x/2, y/2) = ((x+0)/2, (y+0)/2), the midpoint of (x, y) and (0, 0)

    T2(x, y) = (x/2, y/2) + (1/2, 0) = ((x+1)/2, (y+0)/2), the midpoint of (x, y) and (1, 0)

    T3(x, y) = (x/2, y/2) + (0, 1/2) = ((x+0)/2, (y+1)/2), the midpoint of (x, y) and (0, 1)

    T4(x, y) = (x/2, y/2) + (1/2, 1/2) = ((x+1)/2, (y+1)/2), the midpoint of (x, y) and (1, 1)

    Recall

    we see the constant cycles (2), (3), and (4) generate sequences of points that converge to (1, 0), (0, 1), and (1, 1), respectively.

    Constant sequences generate sequences of points that converge to the fixed point of the corresponding transformation.


    The sequence of points generated by applying T1 repeatedly converges to the point with address 1111... = (1).

    We show this is the fixed point of T1, and find its coordinates.

    Fixed point. Say (x*, y*) is the point with address (1). Then

    T1(x*, y*) has address 1(1) = (1).

    Because T1(x*, y*) and (x*, y*) have the same (infinite) address, they must be the same point. That is,

    T1(x*, y*) = (x*, y*)

    and (x*, y*) is the fixed point of T1.

    Coordinates. We see

    (x*, y*) = T1(x*, y*) = (x*/2, y*/2),

    and so (x*, y*) = (0, 0).

    Similar arguments show (2), (3), and (4) are the addresses of the fixed points of T2, T3, and T4, respectively.

    These points have coordinates (1, 0), (0, 1), and (1, 1), respectively. For example,

    (x*, y*) = T2(x*, y*) = ((x* + 1)/2, y*/2).

    So x* = x*/2 + 1/2 and y* = y*/2, giving x* = 1 and y* = 0.
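These fixed points can also be found numerically: because each Ti is a contraction, repeated application converges to the fixed point from any starting point. A small Python check (our own sketch):

```python
def fixed_point(T, p=(0.5, 0.5), n=2000):
    """Approximate the fixed point of a contraction map by iterating it."""
    for _ in range(n):
        p = T(p)
    return p

T1 = lambda p: (p[0] / 2, p[1] / 2)
T2 = lambda p: (p[0] / 2 + 0.5, p[1] / 2)
T3 = lambda p: (p[0] / 2, p[1] / 2 + 0.5)
T4 = lambda p: (p[0] / 2 + 0.5, p[1] / 2 + 0.5)

# A constant sequence converges to the fixed point of its transformation:
for T in (T1, T2, T3, T4):
    print(fixed_point(T))   # the corners (0,0), (1,0), (0,1), (1,1) in turn
```
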


    The next simplest repeated sequence is a 2-cycle: it alternates between two values.

    The simplest 2-cycle is 121212... = (12). Applying T1 and T2 alternately produces a sequence of points converging to two points along the x-axis.

    Some more sequences of points generated by other 2-cycles.

    (13) (14)

    limit points of (14) What cycle is this?

    2-Cycle Addresses

    The limiting points in the last example [121212...] have addresses (12) and (21). To see this, recall the relation between the address and the order in which the transformations are applied. The sequence 121212... gives points in regions with addresses

    1

    21

    121

    2121

    12121

    212121

    and so on.

    Alternate entries in the sequence are

    1

    121

    12121

    1212121

    ...

    (12)

    And

    21

    2121

    212121

    21212121

    ...

    (21)

    2-Cycle Coordinates

    The limiting points in the last example have coordinates (1/3, 0) and (2/3, 0).

    To see this, say (x1, y1) is the point with address (12) and (x2, y2) is the point with address (21). Then notice

    T2(x1, y1) has address 2(12) = 2(12)(12)(12)(12)... = (21)(21)(21)(21)... = (21)

    Because T2(x1, y1) and (x2, y2) have the same (infinite) address,

    T2(x1, y1) = (x2, y2)

    By a similar argument,

    T1(x2, y2) = (x1, y1).

    Combining these two, we see

    T1T2(x1, y1) = (x1, y1)

    and

    T2T1(x2, y2) = (x2, y2).

    From the first we obtain

    (x1, y1) = T1T2(x1, y1) = T1(x1/2 + 1/2, y1/2) = (x1/4 + 1/4, y1/4),

    so

    x1 = x1/4 + 1/4 and y1 = y1/4

    Solving for x1 and y1, we obtain (x1, y1) = (1/3, 0). A similar argument gives (x2, y2) = (2/3, 0).
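The same limit points fall out numerically (a Python sketch of ours): iterating the 2-cycle T2∘T1 converges to the point with address (21), and one more application of T1 then gives the point with address (12).

```python
def T1(p): return (p[0] / 2, p[1] / 2)
def T2(p): return (p[0] / 2 + 0.5, p[1] / 2)

p = (0.5, 0.5)
for _ in range(200):        # repeat the cycle 121212...: T1 then T2 each period
    p = T2(T1(p))           # one full period of the 2-cycle
# p has converged to the point with address (21), approximately (2/3, 0)
print(p)
q = T1(p)                   # the other limit point, address (12): near (1/3, 0)
print(q)
```
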

    Choice of Initial Point

    How do the limiting points depend on the choice of (0.5, 0.5) as starting point?

    Not at all.

    This should be no surprise: recall the attractor of an IFS does not depend on the initial picture. (This is a bit different from the situation at hand, but it certainly makes the result plausible.)

    Here is an illustration. We follow the sequences generated from three different starting points, all with vertices selected in the order 121212... = (12). Blue lines for one starting point, green for the second, red for the third.

    3-Cycles

    The repeating pattern

    T1 followed by T2 followed by T3

    produces a sequence converging to the three points with addresses

    (321), (132), and (213).

    the three points to which these sequences converge, along with their addresses


    3-Cycle Addresses

    The point with address (321) is the fixed point of T3T2T1.

    To see this, say (x, y) is the point with address (321). Then

    T1(x, y) has address 1(321),

    T2T1(x, y) has address 21(321), and

    T3T2T1(x, y) has address 321(321) = (321).

    Because (x, y) and T3T2T1(x, y) have the same (infinite) address, they are the same point.

    Similarly,

    the point with address (132) is the fixed point of T1T3T2, and

    the point with address (213) is the fixed point of T2T1T3.

    3-Cycle Coordinates

    Solving the fixed point equations gives the coordinates of each point. For example,

    (x, y) = T3T2T1(x, y) = T3T2(x/2, y/2)

    = T3(x/4 + 1/2, y/4) = (x/8 + 1/4, y/8 + 1/2)

    so

    x = x/8 + 1/4 and y = y/8 + 1/2

    Solving for x and y gives (x, y) = (2/7, 4/7).

    Similar calculations show

    (1/7, 2/7) is the fixed point of T1T3T2

    and

    (4/7, 1/7) is the fixed point of T2T1T3
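Exact arithmetic confirms these fixed points. A brief check with Python's fractions (our own sketch):

```python
from fractions import Fraction

half = Fraction(1, 2)

def T1(p): return (p[0] * half, p[1] * half)
def T2(p): return (p[0] * half + half, p[1] * half)
def T3(p): return (p[0] * half, p[1] * half + half)

# (2/7, 4/7) is left unchanged by the composition T3 T2 T1:
p = (Fraction(2, 7), Fraction(4, 7))
print(T3(T2(T1(p))) == p)   # True
```

Using Fraction avoids any floating-point rounding, so the equality is exact.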

    3-Cycle in a Different Order: What happens if we apply the 3-cycle transformations in a different order?

    If we repeat the pattern

    T3 followed by T1 followed by T2

    instead of

    T1 followed by T2 followed by T3?

    Limiting points: the same as those produced by the original order.

    To understand why this is so, write the first several terms of both sequences

    123123123123123123123123123123...

    312312312312312312312312312312...

    The second sequence is just the first shifted two terms to the left.

    123123123123123123123123123123...

    312312312312312312312312312312...

    The first sequence is the same as the second, but starting from

    T2T1(0.5, 0.5) instead of from (0.5, 0.5).

    As with the fractals generated by regular IFS, here the final pattern does not depend on the starting point. We prefer to start with (0.5, 0.5) because this is the most neutral choice.

    Can you find an order of cycling through 1, 2, and 3 that converges to a different triple of points?

    The sequence

    T2 followed by T1 followed by T3

    produces sequences of points converging to the points

    (1/7, 4/7), (4/7, 2/7), and (2/7, 1/7)

    shown below on the left.

    For comparison, the previous 3-cycle is shown on the right.

    The same points are produced by any cyclic permutation of the original sequence. For example, the cyclic permutations of

    T2 followed by T1 followed by T3,

    are

    T1 followed by T3 followed by T2, and T3 followed by T2 followed by T1.

    N-Cycles

    The number of points to which the sequence converges is the length of the (shortest) repeated pattern.

    The reason for shortest is that the sequences (123) and (123123) converge to the same three points.


    Driven IFS with Forbidden Combinations

    Gaps in the driven IFS picture indicate combinations of transformations that do not occur.

    To understand the effect of forbidden combinations, we use the addresses of regions of the unit square S.

    We begin with three simple examples.

    Forbid a transformation: T4 never occurs

    If T4 is never applied, then no points land in the subsquare S4 with address 4. That is, the upper right subsquare is completely empty.

    Because the subsquare S4 contains no points, the subsquares

    T1(S4) = S14, T2(S4) = S24, and T3(S4) = S34 contain no points.

    Continuing, the subsquares T1(S14) = S114, T1(S24) = S124, T1(S34) = S134, ..., and T3(S34) = S334 contain no points. That is,

    if i = 4, j = 4, or k = 4, the subsquare Sijk contains no points (below left).

    Similarly,

    the subsquare Sijkm contains no points if any of i, j, k, or m is 4 (below right).

    The result of continuing this process is clear:

    if T4 is never applied, every square whose address contains a 4 is empty.

    With this restriction, we see the IFS generates a right isosceles Sierpinski gasket. This is no surprise, because the IFS {T1, T2, T3} generates a right isosceles Sierpinski gasket.

    Forbid a pair: T4 never immediately follows T1

    If T4 never immediately follows T1, then no points land in the square T4(S1) = S41.

    Because S41 contains no points, the subsquares T1(S41) = S141, T2(S41) = S241, T3(S41) = S341, and T4(S41) = S441 contain no points.

    Continuing, here are the pictures showing the subsquares of address length 4 (left) and 5 (right) containing no points.

    The pattern should be clear:

    no points lie in any subsquare with address containing 41.

    Another Example of forbidden pair

    We impose these restrictions:

    T1 never immediately follows T4

    T2 never immediately follows T3

    T3 never immediately follows T2

    T4 never immediately follows T1

    What are the first few generations of the IFS with these restrictions?

    First, note these restrictions imply no points land in the squares with addresses 14, 23, 32, and 41. That is, the shaded squares will contain no points.

    Continuing on to the length-3 address squares, we see every square with address containing 14, 23, 32, or 41 will contain no points. For example,

    Here is a movie showing the first few iterates of the driven IFS with these restrictions. In contrast to the diagrams above, here the nonempty regions are shaded.

    These examples illustrate the general situation: if some combination Tin...Ti1 of transformations never occurs, then all subsquares with addresses containing the sequence in...i1 must be empty.

    The transformation combinations that never occur are called forbidden combinations. For example, if T4 never immediately follows T1, we say 41 is a forbidden pair.
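A sketch of a forbidden-pair driven IFS in Python (the helper names are ours): drive the square IFS with a random symbol string in which 41 is forbidden, and check that the subsquare S41 = [1/2, 3/4) × [1/2, 3/4) stays empty.

```python
import random

def drive(symbols):
    """Apply the square IFS rules in the order given by the symbol string."""
    shifts = {1: (0.0, 0.0), 2: (0.5, 0.0), 3: (0.0, 0.5), 4: (0.5, 0.5)}
    x, y = 0.5, 0.5
    pts = []
    for i in symbols:
        x, y = x / 2 + shifts[i][0], y / 2 + shifts[i][1]
        pts.append((x, y))
    return pts

def random_symbols_no_41(n, seed=0):
    """A random symbol string in which T4 never immediately follows T1."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        allowed = [1, 2, 3] if out and out[-1] == 1 else [1, 2, 3, 4]
        out.append(rng.choice(allowed))
    return out

pts = drive(random_symbols_no_41(10000))
# S41 = T4(S1) is the square [1/2, 3/4) x [1/2, 3/4); it must stay empty:
print(sum(1 for x, y in pts if 0.5 <= x < 0.75 and 0.5 <= y < 0.75))
```
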

    Graphical Representation

    For those driven IFS determined completely by forbidden pairs, a compact representation of the IFS can be given by a graph showing the allowed pairs.

    The graph has four vertices, one for each Ti,

    and an edge from vertex i to vertex j if Ti can be followed immediately by Tj.

    For example, the Driven IFS with the single forbidden pair 41 has this graph:

    Because T1 cannot be immediately followed by T4, the graph has no arrow from vertex 1 to vertex 4.

    All other combinations are allowed, so all other pairs of vertices are connected by arrows.
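This graph can be sketched as an adjacency structure (Python, our own helper). Note that the pair ji in an address corresponds to Tj applied immediately after Ti, so forbidding 41 removes the edge from vertex 1 to vertex 4.

```python
def allowed_graph(forbidden_pairs):
    """Adjacency structure of allowed successions: graph[i][j] is True when
    T_i may be immediately followed by T_j. The forbidden pair 'ji' (address
    ji, i.e. T_j applied right after T_i) removes the edge i -> j."""
    graph = {i: {j: True for j in (1, 2, 3, 4)} for i in (1, 2, 3, 4)}
    for j, i in forbidden_pairs:
        graph[i][j] = False
    return graph

g = allowed_graph([(4, 1)])     # the single forbidden pair 41
print(g[1][4])                  # False: no arrow from vertex 1 to vertex 4
print(g[4][1])                  # True: T1 may still follow T4
```
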


    Examples of Driven IFS with Forbidden Pairs

    Different combinations of forbidden pairs can generate some very interesting pictures.

    Finally, we move on to driving IFS by numerical data series.

    Not all data are written in an alphabet of four symbols. Can we adapt the driven IFS method to study a wider variety of data - daily closing stock prices, for example? The answer is yes. As a first step, we must coarse-grain the data.

    Driven IFS and Data Analysis

    Coarse-Graining the Data

    We turn the data y1, y2, ..., yN into a sequence i1, i2, ..., iN of 1s, 2s, 3s, and 4s, called the symbol string associated with the data.

    Often the data values yi are measured as decimals, and because we are converting these to only four values, the process of turning the yk into ik is called coarse-graining.

    The range of y values corresponding to a symbol is the bin of that symbol.

    Though there are others, we use five kinds of coarse-graining:

    equal-size bins Divide the range of values into four intervals of equal length.

    equal-weight bins Arrange the bin boundaries so (approximately) the same number of points lie in each bin.

    zero-centered bins For data whose sign is important, take 0 as the boundary between bins 2 and 3; place the other boundaries symmetrically above and below 0. Unlike the first two cases, this is a family of coarse-grainings depending on the placements of the other two bin boundaries.

    mean-centered bins Take the mean of the data to be the boundary between bins 2 and 3; place the other boundaries symmetrically above and below the mean, usually expressed as a multiple of the standard deviation.

    median-centered bins Take the median of the data to be the boundary between bins 2 and 3; place the other boundaries symmetrically above and below the median, usually expressed as a multiple of the range. Note the equal-weight bins are a special case of this.

    Equal-Size Bins

    One method of converting the measured data y1, y2, ..., yN into a symbol string i1, i2, ..., iN is first to find the range of the values, that is, the interval between the maximum (max) and the minimum (min) of the yk.

    Next, divide the range (min, max) into four equal-size bins:

    yk lies in bin1 if min ≤ yk < min + .25(max - min)

    yk lies in bin2 if min + .25(max - min) ≤ yk < min + .50(max - min)

    yk lies in bin3 if min + .50(max - min) ≤ yk < min + .75(max - min)

    yk lies in bin4 if min + .75(max - min) ≤ yk ≤ max

    The numbers separating the bins are called bin boundaries, B1, B2, and B3:

    B3 = min + .75(max - min)

    B2 = min + .50(max - min)

    B1 = min + .25(max - min)

    Each yk lies in one of these bins, so we can convert the sequence y1, y2, y3, ... into the symbol string i1, i2, i3, ... of 1s, 2s, 3s, and 4s associated with the data.

    ik = 4 if yk lies in bin4

    ik = 3 if yk lies in bin3

    ik = 2 if yk lies in bin2

    ik = 1 if yk lies in bin1
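The four rules above translate directly into code; this is a minimal sketch, and the helper name `equal_size_bins` is my choice, not from the text.

```python
def equal_size_bins(ys):
    """Coarse-grain data into symbols 1..4 using four equal-size bins
    spanning [min, max], following the rules above."""
    lo, hi = min(ys), max(ys)
    b1 = lo + 0.25 * (hi - lo)
    b2 = lo + 0.50 * (hi - lo)
    b3 = lo + 0.75 * (hi - lo)
    # yk in [lo, b1) -> 1, [b1, b2) -> 2, [b2, b3) -> 3, [b3, hi] -> 4
    return [1 if y < b1 else 2 if y < b2 else 3 if y < b3 else 4
            for y in ys]

print(equal_size_bins([0.0, 1.0, 2.0, 3.0, 4.0]))  # [1, 2, 3, 4, 4]
```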


    Equal Weight Bins

    Another method of converting the measured data y1, y2, ..., yN into a symbol string i1, i2, ..., iN is to select the bin boundaries B1, B2, and B3 so

    one quarter of the yk satisfy yk ≤ B1,

    one quarter of the yk satisfy B1 < yk ≤ B2,

    one quarter of the yk satisfy B2 < yk ≤ B3, and

    one quarter of the yk satisfy B3 < yk.

    Because each bin contains the same number of points, this is called an equal-weight binning.

    One way to produce an equal-weight binning is to sort the list of yk in increasing order. Then take

    B3 to be the element one-quarter of the way from the top of the sorted list,

    B2 to be the element one-half of the way from the top of the sorted list, and

    B1 to be the element three-quarters of the way from the top of the sorted list.

    Equal-weight bins can be called a maximum entropy partition.
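One way to implement the sorting recipe above; the function name is mine, and taking boundaries at the quartile positions of the sorted list is an assumption, so bin counts are only approximately equal when N is not divisible by four.

```python
def equal_weight_bins(ys):
    """Coarse-grain into symbols 1..4 so each bin holds (about) a quarter
    of the points, with boundaries taken from the sorted data."""
    s = sorted(ys)
    n = len(s)
    # Quartile positions of the sorted list serve as the boundaries.
    b1, b2, b3 = s[n // 4], s[n // 2], s[(3 * n) // 4]
    return [1 if y <= b1 else 2 if y <= b2 else 3 if y <= b3 else 4
            for y in ys]

print(equal_weight_bins([1, 2, 3, 4, 5, 6, 7, 8]))  # [1, 1, 1, 2, 2, 3, 3, 4]
```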

    Zero-Centered Bins

    While there is no reason to think 0 holds any special importance for the data set y1, y2, ..., yN, it may for the first differences y2 - y1, y3 - y2, ..., yN - yN-1.

    For instance, a positive first difference means the data values are increasing, and a negative first difference means the data values are decreasing. This has clear significance for financial data.

    Take B2 = 0, and in the absence of other information, take B1 and B3 symmetrically spaced about B2 = 0.

    Usually B1 and B3 are expressed as a fraction of the range of the differences.
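A sketch of this coarse-graining applied to the first differences; the function name and the `frac` parameter (the fraction of the range used for B1 and B3) are my choices.

```python
def zero_centered_symbols(ys, frac=0.25):
    """Symbols for the first differences y[k+1] - y[k], with B2 = 0 and
    B1, B3 = -/+ frac * (range of the differences) -- one member of the
    family of zero-centered coarse-grainings described above."""
    diffs = [b - a for a, b in zip(ys, ys[1:])]
    r = max(diffs) - min(diffs)
    b1, b3 = -frac * r, frac * r
    return [1 if d <= b1 else 2 if d <= 0 else 3 if d <= b3 else 4
            for d in diffs]

# diffs = [1, 0, -2, 3], range = 5, boundaries at -1.25, 0, +1.25
print(zero_centered_symbols([0, 1, 1, -1, 2]))  # [3, 2, 1, 4]
```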

    Mean-Centered Bins

    Another way to bin the data is to set B2 = m, the mean of the data values, and set B1 and B3 to some fraction or multiple of the standard deviation.

    Median-Centered Bins

    Another way to bin the data is to set B2 = m, the median of the data values, and set B1 and B3 to some fraction of the range.
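Both centered binnings can be sketched together. The name `centered_boundaries` and its `spread` parameter are hypothetical; the offset is a multiple of the standard deviation for mean-centered bins and a fraction of the range for median-centered bins, as described above.

```python
import statistics

def centered_boundaries(ys, center="mean", spread=0.5):
    """Return (B1, B2, B3) with B2 at the mean (offset = spread * standard
    deviation) or at the median (offset = spread * range)."""
    if center == "mean":
        b2 = statistics.mean(ys)
        off = spread * statistics.pstdev(ys)
    else:
        b2 = statistics.median(ys)
        off = spread * (max(ys) - min(ys))
    return b2 - off, b2, b2 + off

print(centered_boundaries([1, 2, 3, 4, 5], center="median"))  # (1.0, 3, 5.0)
```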

    Comparing Stocks and Indices by Driven IFS

    In his project, Joseph Thornton investigated some variants on driving IFS by financial data.

    He used daily closing prices for a two-year period, about 500 data points.

    Daily Percentage Changes

    First we look at two years' data for several stocks, using daily percentage changes to set the bin boundaries. Observing that the market tends to fluctuate most often between +2.5% and -2.5%, as a first experiment Thornton looked at daily changes in stock prices and used zero-centered bins with these bin boundaries:

    B3 = +2.5%: above this boundary the price increased by more than 2.5% of its previous value

    B2 = 0: the price is unchanged

    B1 = -2.5%: below this boundary the price decreased by more than 2.5% of its previous value

    Here are the driven IFS for Citigroup, American International, General Electric, Dell, Sonus Networks, Qwest, Tyson Foods, Colgate-Palmolive, and Lucent.

    Note that for Citigroup, AI, GE, Tyson, and Colgate-Palmolive, the predominant feature is strong motion along the 2-3 diagonal, indicating most changes are within 2.5% of the previous price.

    Another feature, especially strong in the GE graph, is the absence of points along the 1-4 diagonal. Although there are points in addresses 1 and 4 (even in addresses 11 and 44), there are very few consecutive moves of > +2.5% followed by < -2.5%, and vice versa.

    Contrast this with Sonus, where most of the activity is along the 1-4 diagonal, indicating relatively wild swings in closing price. The heavy cluster of points in corner 1 does not speak of a successful stock.

    Roughly speaking, the older-economy companies - Citigroup, Tyson, Colgate-Palmolive, GE, and AIG - have stronger 2-3 diagonals, indicating less volatile behavior.


    Next we experiment with uniformizing the driven IFS by scaling the bin boundaries with the stock's β value.

    β scaling

    The β factor of a stock is the volatility of the stock relative to that of the market.

    β > 1 means the stock is more volatile than the market.
    β = 1 means the stock and the market are equally volatile.
    β < 1 means the stock is less volatile than the market.

    The 1-4 diagonals of the tech stocks reflect larger daily percentage changes, so we would expect higher volatility.

    As a quantitative test of this, Thornton scaled the bin boundaries with each stock's β. For example, Qwest has β = 2.15, so the first and third bin boundaries are set at 2.15 × 2.5% = 5.38% above and below 0.
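The β-rescaling amounts to multiplying the ±2.5% band by each stock's β; a minimal sketch (the function name is mine, `base` is the market-wide 2.5% band assumed above):

```python
def beta_scaled_boundaries(beta, base=2.5):
    """Zero-centered bin boundaries (in percent) rescaled by a stock's
    beta: B1 = -beta * base, B2 = 0, B3 = +beta * base."""
    return -beta * base, 0.0, beta * base

# Qwest, beta = 2.15: boundaries at roughly +/-5.38%, matching 2.15 x 2.5%.
b1, b2, b3 = beta_scaled_boundaries(2.15)
```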

    Here are the rescaled driven IFS (left), each grouped with its original (right) for comparison.

    Citigroup: β = 1.33
    American International: β = 0.85
    General Electric: β = 1.08
    Dell: β = 1.83
    Sonus: β = 4.95
    Tyson: β = 0.47

    Note that in general, rescaling the bin boundaries by the stock's β makes the driven IFS look much alike. Note particularly the change in the Sonus IFS.

    The obvious exception is Tyson, whose β is so small that the rescaling puts many more points into bins 1 and 4.

    IFS Driven by Texts

    Driven IFS is a potentially interesting tool for analyzing patterns in data, but we need to be careful with how the data are binned.

    A text is a string, but in an alphabet of more than four symbols. How can we convert this into a string in an alphabet of four symbols?

    One possibility is to treat words as the fundamental units of the text, and assign bins by parts of speech. This has an obvious problem: distinguishing how much of the driven IFS structure is due to the author's style, and how much to grammatical constraints.

    Another choice is to treat letters as the fundamental units and assign bins by ignoring some letters, or grouping the letters together. Of course, any choices must be justified in a way reflecting properties of the text. This is not an easy problem.

    Phonological analysis

    As we have seen, suitable ways of binning texts are elusive.

    Binning by phonemes, the individual sounds making up the words of a spoken language, is a promising direction, especially because phonemes can be divided into a small collection of categories.

    In her spring 2003 project, Emily Runde studied phonological properties in two of T. S. Eliot's poems, The Love Song of J. Alfred Prufrock and Hollow Men.

    Do these poems exhibit similar phonological patterns? What about patterns in Lewis Carroll's Jabberwocky, which contains many fabricated words?

    Will Eliot's texts, Tradition and the Individual Talent for example, reveal different patterns?

    Here are the phonological categories used.

    Category          Examples               Description
    vowels            a, e                   no vocal tract friction, most sonorous
    glides            y, w                   consonantal forms of vowels i and u, slightly less sonorous
    liquids           l, r                   friction caused by the tongue
    nasals            m, n, ng               slightly more friction, articulated through the nose
    obstruents        all other consonants   varying degrees of vocal tract constriction
      fricatives      s, f, th               partial vocal tract constriction
      plosives        p, t, k                air buildup with complete vocal tract closure, then released in a short burst
      affricatives    ch, dg                 fricative and plosive sound combined
    syllabic boundary
    word boundary                            word boundaries supersede syllable boundaries


    First she analyzed these pieces using a four bin IFS with the familiar transformations

    Here are the driven IFS for Love Song of J. Alfred Prufrock, Hollow Men, Jabberwocky, and Tradition and the Individual Talent.

    The order of the transformations is determined by the order in which the phonological categories occur in the text, according to this correspondence:

    T3: liquids and nasals    T4: all obstruents
    T1: vowels                T2: glides

    Some analysis

    In all four the squares 22, 32, and 42 are empty, indicating only a vowel can follow a glide.

    In all four the squares 111, 234, 334, and 434 are empty.

    111: three successive vowels do not occur.

    234, 334, 434: a vowel must follow the combination obstruent then liquid or nasal.

    Prufrock, Jabberwocky, and Tradition all have 212 empty, while Hollow does not. Consequently, glide-vowel-glide is possible, but uncommon, at least among these samples.

    Tradition has 433 empty, while Prufrock, Jabberwocky, and Hollow do not. (In Jabberwocky, why does the point on the boundary between 433 and 344 belong to 433?) Is this combination a phonological construction that distinguishes poetry from prose?

    Next she analyzed these pieces using a nine bin IFS.

    Here are the nine-bin driven IFS for Love Song of J. Alfred Prufrock, Hollow Men, Jabberwocky, and Tradition and the Individual Talent.

    T7: plosives    T8: syllabic boundaries    T9: word boundaries
    T4: nasals      T5: fricatives             T6: affricatives
    T1: vowels      T2: glides                 T3: liquids

    Some analysis

    In all four the squares 22, 32, 42, 52, 62, 72, 82, and 92 are empty: only a vowel can follow a glide.

    In all four, 23 and 26 are empty: a glide cannot follow a liquid or an affricate.

    In all four, among *8 (for * = 1, ..., 7) the most filled in are 58 and 78: syllables begin with fricatives or plosives more often than with other phonemes.

    Among *9 the most filled are 19, 59, and 79: words begin with vowels as often as with fricatives or plosives.

    In all four, among 8* the most filled is 81: syllables are more likely to end with vowels.

    The common sonority profile is an arc: often a syllable begins with a low sonority phoneme, followed by one of higher sonority, then lower.

    831 is more densely filled than 871: after a vowel a syllable is more likely to end in a nasal than in a plosive.

    781 is densely filled: after a syllable ends with a vowel the next syllable can start with a plosive.

    The rules governing nasals and liquids are similar: in each of the four plots, the pattern in square 3 is similar to that in square 4.

    In each of the four plots, the pattern in 5 is similar to that in 7 (6 also is similar, though much less filled), indicating that the three subcategories of obstruents behave similarly.

    Although 57 and 75 are fairly populated, 55 and 77 are not, and in general the pairs ii are (nearly) empty, indicating that only rarely do two phonemes of the same phonemic category occur adjacently within a syllable.

    Finally, the patterns in 8 and 9 are similar, though 9 is more filled, indicating many one-syllable words. Although the sonority profile refers to a syllable rather than to a word, this shows the same pattern of sonority near a syllable boundary is repeated at word boundaries.

    Nine Bin IFS Rules

    T7(x,y) = (x/3, y/3) + (0, 2/3)    T8(x,y) = (x/3, y/3) + (1/3, 2/3)    T9(x,y) = (x/3, y/3) + (2/3, 2/3)

    T4(x,y) = (x/3, y/3) + (0, 1/3)    T5(x,y) = (x/3, y/3) + (1/3, 1/3)    T6(x,y) = (x/3, y/3) + (2/3, 1/3)

    T1(x,y) = (x/3, y/3)               T2(x,y) = (x/3, y/3) + (1/3, 0)      T3(x,y) = (x/3, y/3) + (2/3, 0)

    These transformations generate the unit square by dividing it into nine subsquares, each of 1/3 the width and height of the unit square.
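A sketch of these nine maps in code; the names are mine, and the T6 offset is taken as (2/3, 1/3) so that the nine subsquares tile the unit square.

```python
# Offsets for the nine maps T_i(x, y) = (x/3, y/3) + offset_i, arranged in
# three rows of three so the subsquares tile the unit square.
NINE_OFFSETS = {
    1: (0, 0),     2: (1 / 3, 0),     3: (2 / 3, 0),
    4: (0, 1 / 3), 5: (1 / 3, 1 / 3), 6: (2 / 3, 1 / 3),
    7: (0, 2 / 3), 8: (1 / 3, 2 / 3), 9: (2 / 3, 2 / 3),
}

def apply_map(i, x, y):
    """Apply T_i: contract by 1/3 and translate into subsquare i."""
    dx, dy = NINE_OFFSETS[i]
    return x / 3 + dx, y / 3 + dy

# Driving with the symbol string 1, 5, 9 leaves the point in subsquare 9
# (the most recently applied map), at depth-three address 951.
x, y = 0.5, 0.5
for i in [1, 5, 9]:
    x, y = apply_map(i, x, y)
```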

    Here are the length 1 addresses.

    Here are the length 2 addresses.