ΑΡΙΘΜΟΣ (arithmos: Greek for number)<br /><br /><h2>Remembering Gravity</h2><i>October 21, 2017</i><br /><br />I was recently testing out my vertical and was disappointed in the results. I was also mindful of my current endeavour to slim down a bit. And the thought came to mind: if I lost 20lbs, what would the impact be on my vertical jump? My intuition at that moment was that it should easily be possible to calculate an estimate. But as I thought about it some more, I realized I had oversimplified the matter and I remembered how gravity works.<br /><br />There are two main pieces to the puzzle:<br /><br /><ol><li>Determine the effect on take off velocity.</li><li>Determine the impact of that change in velocity on vertical jump (the easy part).</li></ol><br /><h3>Introduction to the Easy Part</h3>The classic equations of kinematics will help us with 2, in particular the equation that describes the relationship between displacement and time under constant acceleration. Earth's gravity exerts a force on all bodies in proximity to it which results in the same downward acceleration (in the absence of other forces). The force isn't constant for all objects, but the acceleration is. If you start from a constant acceleration of \(g = -9.81 m/s^2\) and you know the initial vertical velocity of your take off, you can arrive at the classic velocity and displacement formulas:<br />$$v(t) = v_0 + g t$$<br />$$d(t) = d_0 + v_0 t + \frac{1}{2} g t^2,$$<br />where \(d_0\) is the initial displacement and \(v_0\) is the initial (vertical) velocity. I was too lazy to look that up, so I just derived it. Note that I have put the minus sign on the gravity and not in the formulas. 
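As a quick numerical sanity check (my own sketch, not part of the original derivation), these two formulas can be evaluated directly; the take-off velocity \(v_0 = 3.0 m/s\) here is an assumed example value:

```python
# Projectile kinematics under constant gravitational acceleration.
# v0 is an assumed example take-off velocity, not a measured one.
g = -9.81   # m/s^2; the minus sign is carried by g itself, as in the formulas
v0 = 3.0    # m/s, example initial vertical velocity

def v(t):
    """Vertical velocity at time t: v(t) = v0 + g*t."""
    return v0 + g * t

def d(t, d0=0.0):
    """Vertical displacement at time t: d(t) = d0 + v0*t + (1/2)*g*t^2."""
    return d0 + v0 * t + 0.5 * g * t * t

# The peak of the jump occurs when the velocity reaches 0, i.e. t_p = -v0/g.
t_p = -v0 / g
peak = d(t_p)
print(round(peak, 4))  # prints 0.4587
```

So a 3.0 m/s take-off gives roughly a 0.46 m (18") jump; the same functions work for any take-off velocity.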
We can simplify this down to the variables we care about based on the fact that the velocity at peak (time \(t_p\)) is 0:<br />$$d(t_p) - d_0 = -\frac{v_0^2}{2 g}.$$<br />You'll notice that it doesn't matter how much you weigh for this part of the discussion. It matters how fast you are moving vertically at the moment your feet leave the ground. Now, in order to measure your vertical for the purposes of determining your initial velocity, you need to have a starting reach and a final reach. Your starting reach \(d_0\) will be taken on your tiptoes, because your foot extension is still imparting force to the ground. (I don't know enough about this topic to say how critical this phase is, but I did a little self-experimentation and my limited self-awareness suggests I'm using foot extension as a part of my jump.) You're going to need your peak reach on the tape as well. My displacement under this manner of measurement is 17" = 0.43m (but for the record my standard reach vertical is 20"). My initial velocity is about \(2.91 m/s\). Does this help with determining my vertical if I lost 20lbs? Not really. But it's a cool number to know.<br /><br /><h3>1. The Hard Part</h3>The math we will do here is not difficult, but the justification is a little hazy. We need some kind of a model that explains the relationship between my weight and my initial velocity in a vertical jump. I'm going to model the process of jumping as having near constant acceleration. This is probably bogus, but I want a number. Let's put all the cards on the table:<br /><br /><ol><li>Near constant vertical acceleration.</li><li>The force production of the legs is the same for the heavy me and the light me.</li></ol><div>All of these assumptions are the basic ingredients of a creamy fudge. Assumption 2 is probably okay for small enough changes in weight, but note that large changes in weight would invalidate it. 
You can convince yourself of the limited applicability of assumption 2 by attempting to do shot put with a softball and a 12 lb shot. You'll be underwhelmed by the distance the softball goes if you haven't already intuited it.</div><div><br /></div><div>Measuring the distance of the vertical acceleration is a bit of a problem. Consider that your feet are not accelerating for the first part of the movement, but your head is. We're going to need to know the distance that the center of gravity is moved through the movement. There is some research that has gone into the question of the center of gravity of a human body in different positions. Here's a <a href="https://www.google.ca/url?sa=t&rct=j&q=&esrc=s&source=web&cd=6&cad=rja&uact=8&ved=0ahUKEwirwb3yyoLXAhUk04MKHZqYBLsQFghKMAU&url=http%3A%2F%2Fwww.sfu.ca%2F~stever%2Fkin201%2Flecture_outlines%2Flecture_5_cog.pdf&usg=AOvVaw3EijM-kimMaSdakoyxBpUO">quick and easy-to-follow presentation of the topic from Ozkaya and Nordin of Simon Fraser University</a>. What's missing are the magic numbers I want. </div><div><br /></div><div><a href="https://www.physio-pedia.com/Centre_of_Gravity">Physiopedia</a> says the CoG of a person in the anatomical position (like you're dead on a board) "lies approximately anterior to the second sacral vertebra." <a href="https://www.google.ca/url?sa=t&rct=j&q=&esrc=s&source=web&cd=25&cad=rja&uact=8&ved=0ahUKEwiOz_OD0YLXAhWj14MKHZG8ChYQFgiVATAY&url=https%3A%2F%2Fwww.asu.edu%2Fcourses%2Fkin335%2Fdocuments%2FLinear%2520Kinetics.pdf&usg=AOvVaw0nXp9A0QGo3rwdANxqN5ID">Another resource</a> (from Peter Vint, hosted on the Arizona State University website) indicates that this is located at about 53-56% of standing height in females and 54-57% of standing height in males. I'm going to need to tweak this to get my take off CoG. I also need my bottom CoG height. I'm going to be a little more vague about coming up with that number. 
I'm going to eyeball it with a tape measure and reference some google pictures. </div><div><br /></div><div>55.5% of my height would be about 39". Add to that the height of going on tiptoes (3") to get 42" height of CoG at take off. I estimate my initial CoG at the bottom of my crouch as 27". So, I have 15" = 0.38m within which to accelerate. We need to bring some equations together here:</div><div>$$v_f = a t$$</div><div>$$d = \frac{1}{2} a t^2,$$</div><div>where \(d\) is the displacement of my CoG from crouch to take off, \(a\) is my net acceleration and \(v_f\) is my velocity at the end of the jumping phase (take off). This is the same number as my initial velocity (earlier called \(v_0\)). Solving for \(a\) yields</div><div>$$a = \frac{v_f^2}{2 d} = 11.1 m/s^2.$$</div><div><br /></div><div>The net force acting on me includes force of gravity \(F_g\) and the floor pushing me up \(N\) in response to gravity and the force I am applying to the floor to initiate the jump.</div><div>$$F_{NET} = N + F_g$$</div><div>The force of the floor pushing me up is in response to the force I am applying and the force of gravity. So, \(N = -F_g - F_a\). The result is that the net force on my CoG is the force I am applying (in the opposite direction; I am pushing my feet down on the floor but the net force is upward). </div><div><br /></div><div><i>(When it comes to the mass involved there is a bit of a conundrum. I am currently about 225 lbs. But the force I generate as part of the jump is not acting to directly accelerate all 225 lbs. Certainly the floor is pushing back on all 225lbs of me though. From the perspective of the force I am applying, I could probably exclude the lower leg. I found an estimate for men of about 4.5% per lower leg. So, knock off 20 lbs. Then the mass I am accelerating is about 205 lbs. However, we started down this path talking about the CoG and we included the lower leg in this. 
Also, we are thinking in terms of external forces in our basic force equation. If we want to get super detailed, we will need to think of internal forces and that can get complicated. In the end, the calculation appears only to be affected by fractions of an inch. I'm hunting for elephants so I will not make a careful search of the short grass.)</i></div><div><br /></div><div>Using my full weight in kg, let's see what my acceleration would be if I lost 10kg. I'm currently about 102kg. Then the projected acceleration can be calculated from</div><div>$$F_a = m_i a_i = m_f a_f$$</div><div>$$a_f = \frac{m_i a_i}{m_f} = \frac{102 kg \cdot 11.1 m/s^2}{92 kg} = 12.3 m/s^2$$</div><div>Now we can get my new take off velocity:</div><div>$$v_f = \sqrt{2 d a} = \sqrt{2 \cdot 0.38 m \cdot 12.3 m/s^2} = 3.06 m/s$$</div><br /><h3>2. The Easy Part</h3>My new vertical, D, is an easy calculation with our new take off velocity.<br />$$D = -\frac{v_f^2}{2 g} = -\frac{(3.06 m/s)^2}{2 \cdot (-9.81 m/s^2)} = 0.477 m = 18.8"$$<br />So, if I lose about 22lbs, I will have almost a 2" higher vertical without developing any further explosiveness in my legs.<br /><br />I believe I will invest in other strategies as well.<br /><br /><i>Darren Irvine</i><br /><br /><h2>Shortcuts in Teaching Mathematics Lead to Quicker Dead Ends</h2><i>September 9, 2017</i><br /><br />There are lots of times in life where shortcuts lead to efficiency. Efficiency is great, provided it is actually effective at achieving your goals (or the goals you should have). On the other hand, you can sometimes efficiently achieve a short term goal only to find yourself at a dead end later on.<br /><br />If you're in a maze, not every step closer to straight-line proximity with the cheese is necessarily getting you closer to eating the cheese. 
In a maze, you can be right beside the cheese with just a single wall between you and the cheese and you might be as far away as possible from being able to eat the cheese. Sometimes you need to head in a direction that seems to take you further from your goal, in order to be closer to achieving it. Do you want to have a really strong smell of cheese or do you want to actually eat cheese?<br /><br />Take weight lifting as an example. Improving your technique often takes you back to lighter weight. Your goal is to lift lots, right? Well, lighter weight seems like the wrong direction if you're thinking naively. But improved technique will take you further along a path that can actually lead to "<i>having </i>cheese" rather than just "<i>smelling</i> cheese", both because you will be less prone to injuries which will set you back and because you will be training your muscles to work along a more efficient path. So, suck it up!—and reduce the weight if you have to.<br /><br />What follows is a series of common math shortcuts and suggestions for avoiding the pitfalls of the shortcuts (like, avoiding the shortcuts 😜). Some of these are statements that arise from students who hear a teacher express a proper truth the first time. But, when asked to recall the statement, the student expresses an abbreviated version of the statement that differs in a way that makes it not true anymore. Sometimes the student really did understand, but was experiencing a "verbally non-fluent moment" or just didn't want to expend the energy to explain. The teacher, trying to be positive, accepts this as a token of the student paying attention and then gets lazy themselves and doesn't add a constructive clarification.<br /><br />In any event, the quest for simplicity and clarity has pitfalls. Merely making something <i>appear</i> simple is not a sure path to understanding. Go deeper.<br /><br /><h3>Two Negatives Make a Positive</h3><b>Why it's bad</b>: It is a false statement. 
If I stub my toe and later hit my thumb with a hammer, it is not a good thing; it is cumulatively bad.<br /><br /><b>How it happens</b>: Quick summary (not intended to be fully accurate) and humor. Unfortunately, the quick summary can supplant the real thing if students are not reminded consistently about the full version of the statement (below).<br /><br /><b>Better</b>: A negative number multiplied by a negative number produces a positive number.<br /><br /><b>Expansion</b>: Negative numbers are a way of expressing a change in direction, as in a (sometimes figurative) 180° change in direction. $10 is something you want to see coming your way. -$10 can go to someone else.<br /><br />With this in mind you can explain that subtraction is equivalent to adding a negative number. A negative number is equivalent to multiplying the corresponding positive number by -1. The negative on the front of the number means you move to the "left" on the number line (a metaphor I think is sound). To put it a little differently, if you pick a random number on the number line and multiply it by -1 it is like mirroring it about zero (0). Also, multiplying a number by -1 is equivalent to subtracting it from zero. That is,<br />$$0 - z = (-1) z = -z,$$ which is more or less a part of the definition and development of negative numbers in a technical sense.<br /><br />Displacement is one of the easiest illustrations to use. Suppose I am standing on a track that has chalk lines marked from -50 m to 50 m. I am standing at the 0 m mark facing the end with positive numbers. Instructions show up on a board in a few different forms:<br /><ol><li>Advance 10 m. </li><li>Go back 10 m.</li><li>Move 10 m.</li><li>Move -10 m.</li></ol>It is easy enough to see that 1 is equivalent to 3 and 2 to 4. Following a list of such instructions will result in a predictable finishing position which can be worked out one step at a time in order or can be put into a single mathematical expression. 
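The board instructions translate directly into signed arithmetic; here is a small sketch (mine, not from the original post) with an example instruction list:

```python
# Each instruction becomes a signed displacement in metres:
# "Advance 10 m" -> +10, "Go back 10 m" -> -10,
# "Move 10 m"   -> +10, "Move -10 m"   -> -10.
instructions = [+10, -10, +10, +10, -10]  # an example sequence

# Working out the finishing position one step at a time, in order...
position = 0
for step in instructions:
    position += step

# ...agrees with the single expression, in any order (commutativity):
assert position == sum(instructions) == sum(sorted(instructions))
print(position)  # prints 10
```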
Commutativity and associativity can be explored by comparing differences in order applied to the mathematical expression as well as the list of instructions. I can reorder the sequence of instructions and produce a new expression that gives the same result, or I can tweak the expression and spit out revised instructions that parallel the revised expression and produce the same result. Arithmetic is intended to express very practical things, and so if something you do with your math expression causes a disconnect with the real life example, you are guilty of bad math. The problem will not be with commutativity or associativity, but with the implementation of it. It is worth investing a good deal of time on this, but it will probably have to be brought up again and again when you move on to other things because, I think, students sometimes get slack on their efforts at understanding when something appears too easy.<br /><br />The next step is to understand that reversing direction twice puts you back on your original direction. We can see how this works on the track by working with the displacement formula:<br />$$d = p_f - p_i,$$ where \(p_f\) is final position and \(p_i\) is initial position. It is easy to illustrate on a chalk/white board that if I start at the -20 m mark and travel to the 50 m mark I will have traveled 70 m, not 30 m. 
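That computation can be checked directly (a sketch; variable names follow the formula):

```python
def displacement(p_i, p_f):
    """d = p_f - p_i: displacement from initial to final position (metres)."""
    return p_f - p_i

# Start at the -20 m mark, travel to the 50 m mark:
d = displacement(-20, 50)   # 50 - (-20): subtracting a negative adds
print(d)  # prints 70, not 30
```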
Using the formula to get there will require combining the negative signs in a multiplication sense.<br /><br />I would love to have a simple example that involves two negative non-unity numbers, but while I've done this arithmetic step countless times in the solving of physical and geometrical problems, I have trouble isolating a step like this from a detailed discussion of some particular problem and still retaining something that will lead to clearer understanding than the displacement example.<br /><br /><h3>Heat Rises</h3><b>Why it's bad</b>: It is a false statement.<br /><br /><b>How it happens</b>: Unreasonable laziness.<br /><br /><b>Better</b>: Hot air rises (due to buoyancy effects).<br /><br /><b>Expansion</b>: The general way that heat transfer works is not related to elevation change. A more accurate but still cute, snappy saying to summarize how heat moves is "heat moves from hot to cold" or, my favorite, "heat moves from where it <b>is</b> to where it <b>isn't</b>" (relatively speaking).<br /><br />So, why does hot air rise? The cause is density differences. Denser gases will fall below less dense gases and "buoy up" or displace the less dense gases. Density is affected by both molecular weight and configuration of particles. Configuration is affected by temperature. The warmer the temperature, the faster the particles move and bang into each other, tending to maintain a sparser configuration that makes them less dense (collectively).<br /><br />It might be better to think of the more dense gas displacing the less dense gas. The gravitational effect on the more dense gas pulls it down more strongly than it does the less dense gas, and the more dense gas pushes its weight around so that the less dense gas has to get out of the way. "Out of the way" is up. (A bit of a fanciful description, but a good deal better than "heat rises".)<br /><br />Bottom line, if you want a shortcut, "hot air rises" is better than "heat rises". 
It's a small difference, but it is still the difference between true and false.<br /><br /><h3>Cross Multiply and Divide</h3><b>Why it's bad</b>: It does not require the student to understand what they are doing. For some students, this is all they remember about how to solve equations. Any equation. And they don't do it correctly because they don't have a robust understanding of what the statement is intended to convey.<br /><br /><b>How it happens</b>: Proportions are one of the critical components of a mathematics education that are invaluable in any interesting career from baking, to agriculture, to trades, to finance, to engineering, to medicine (or how about grocery shopping). Teachers are rightly concerned to turn out students who can at least function at this. It breaks down when it is disconnected from a broader understanding of equations. Students may look at the easy case of proportions as a possible key to unlocking other equations, which has a degree of truth to it. However, it is important to emphasize the reason why cross multiply and divide works (below). Without understanding, there is little reason to expect the simple case to spill over to help in the hard cases.<br /><br /><b>Better</b>: Always do the same thing to both sides of an equation. If both sides are equal, then they are equal to the same number (for a given set of input values for the variables). If I do the same thing to the same number twice, I should get the same result. The result might be expressed differently, but it should still be equivalent.<br /><br />For simple equations, often the best operation to do (to both sides, as always) is the "opposite" of one of the operations shown on one or both sides of the equation. Cross multiply and divide is a specialization of this principle that is only applicable in certain equations which must be recognized. 
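The point can be made concrete with a proportion like \(x/4 = 3/8\); a sketch (my example, not the post's) showing that "cross multiply and divide" is just a packaged-up application of "do the same thing to both sides":

```python
from fractions import Fraction

# Solve x/4 = 3/8 for x.
rhs = Fraction(3, 8)

# "Do the same thing to both sides": multiply both sides by 4.
# The left side becomes x; the right side becomes (3/8) * 4.
x_same_thing = rhs * 4

# "Cross multiply and divide": x = (3 * 4) / 8.
x_cross = Fraction(3 * 4, 8)

# Both routes land on the same value of x.
assert x_same_thing == x_cross == Fraction(3, 2)
print(x_same_thing)  # prints 3/2
```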
The ability to recognize them comes from having good training on the handling of mathematical expressions (see remarks on BDMAS).<br /><br /><b>Expansion</b>: Provided "do" means "perform an operation", the above is pretty valid. The other type of thing you can "do" is rearrange or re-express one or both sides of an equation such that the sides are still equivalent expressions. Rearrangement does not have to be done to both sides of the equation because it does not change its "value". When operations are performed, they must be applied to each side in its entirety as if each entire side was a single number (or thing). Sometimes parentheses are used around each side of the equation so that you can convey that distributivity applies across the "=" sign, but from there, distributivity (or lack thereof) needs to be determined based on the contents of each side of the equation.<br /><br /><h3>Most Acronyms (but Especially FOIL and BDMAS)</h3><b>Why they're bad</b>: My objection is qualified here (but not for FOIL). An acronym can sometimes summarize effectively, but it is not an explanation and does not lead to understanding. In rare cases, understanding may not be critical for long term proficiency, maybe. But an acronym is a shoddy foundation to build on. If you're trying to make good robots, use acronyms exclusively.<br /><br /><b>How it happens</b>: Acronyms can make early work go easier and faster. This makes the initial teaching appear successful—like a fresh coat of paint on rotten wood. Teacher and student are happy until sometime later when the paint starts to peel. Sometimes after the student has sufficient understanding they may continue to use certain acronyms because of an efficiency gain they get from it, which may lead to perpetuating an emphasis on acronyms.<br /><br /><b>Better</b>: Teach students to understand first. Give the student the acronym as a way for them to test if they are on the right track when you're not around. 
Use it very sparingly as a means of prompting them to work a problem out for themselves. (My ideal would be never, but realistically, they need to be reminded of their backup strategy when they get stuck.) Never, ever take the risk of appearing to "prove" the validity of operations you or others have performed by an appeal to an acronym (unless it is a postulate or theorem reference)—that's not just bad math, it is illogical.<br /><br /><b>Expansion</b>: Certain acronyms, if you stoop to use them, can possibly be viewed as training wheels. Maybe BDMAS qualifies. But is there a strategy for losing the training wheels, or are the students who use the acronym doomed to a life of having nothing else but training wheels to keep from falling over?<br /><br /><div>So, BDMAS is a basic grammar summary. But you need to become fluent in the use of the language. A good way to get beyond the acronym is to have clear, practical examples of things you might want to calculate that involve several operations. Calculating how much paint you need is a good way to help convey how order of operations works. Before you calculate the amount of paint you need, you get the surface area, \(s\). The total surface area is a sum of the surface areas of all surfaces I want to paint. If I have the dimensions of a rectangular room, I can get the area of each wall and add them together. To make the example more interesting, we will choose not to paint one of the long walls. Because order of operations gives precedence to multiplication over addition, I have a simple expression for a simple thing:</div><div>$$s = lh + wh + wh = lh + 2wh = h(l + 2w).$$ If you explain how to arrive directly at each expression without using algebra (with reference to simple diagrams), the meaning of each expression can be understood at an intuitive level. Understanding the geometry of the situation gets tied to understanding of the sentences you are making in the language of math. 
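With made-up room dimensions (my example numbers in feet, not from the post), the three equivalent forms of \(s\) can be checked step by step:

```python
# Example room dimensions in feet (assumed values for illustration).
l, w, h = 16, 10, 8   # length, width, height

# Three equivalent expressions for the paintable wall area
# (one long wall omitted), differing only in how they are written:
s1 = l*h + w*h + w*h      # each wall summed separately
s2 = l*h + 2*w*h          # like terms collected
s3 = h * (l + 2*w)        # common factor pulled out

# Order of operations makes all three read correctly without extra parentheses.
assert s1 == s2 == s3
print(s1)  # prints 288
```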
To get the number of cans of paint \(N\), you need coverage, \(c\), in area per can. Then \(N = s/c\). And now, if you didn't already demonstrate how parentheses support the act of substitution in the surface area development, now is a good time: you can use substitution for one of the ungainlier expressions for the surface area and get:</div><div>$$N=(lh + 2wh)/c.$$ If you also walk through how to do the calculation in multiple simple steps you can draw the parallels with the steps you would take in calculating using the above formula. I realize substitution showed up much later than order of operations in the curriculum I received, but I believe this is a mistake. Even if the student is not expected to use substitution in grade 5, why not let them have an early preview so it doesn't seem like it's from outer space when they need it?</div><div><br /></div><div>Oh, yes, and FOIL. Don't use FOIL outside the kitchen. Better to teach how distributivity applies to multiterm factors, which will again be something like a grammar lesson and can incorporate some substitution (e.g., replace \(x + y\) with \(a\) in one of the factors) or "sentence" diagramming, which is beyond the scope of this post.</div><div><br /></div><h3>Using Keywords to Solve Word Problems</h3><b>Why it's bad</b>: It does not require the student to understand what they are reading, which masks long-term learning problems and leads to long-term frustration for the student.<br /><br /><b>How it happens</b>: Students normally want something to do to help them get unstuck. Telling them they have to understand what they are reading isn't the most helpful advice, and giving a bunch of examples of similar expressions and finding ones they already understand seems like a lot of work to go through. Keywords are fast and easy to tell students and are often enough to get stronger students started.<br /><br /><b>Better</b>: Find analogous expressions that are already understandable to the student. 
If you can find statements that the student already understands at an intuitive level, you may be able to point out the similarity between the statement they are having trouble with and the statements they already can relate to.<br /><br /><b>Expansion</b>: I am not aware of a standard treatment of this issue that meets my full approbation. We use language every day, and we don't use keywords to figure out what people mean by what they are saying. We shouldn't use language any differently with a word problem. It's the same language!<br /><br />The words used in grade 6 word problems are all everyday words. What is needed is the ability to understand and use the same words in some new contexts. Providing a lot of examples is probably the way forward with this. Being able to restate facts in other equivalent ways may be a good indication of understanding and accordingly a good exercise.<br /><br />It's important to recognize that language is complex and takes time to learn. Not everyone will learn it at the same rate, and having a breadth and variety of examples with varied complexity is probably necessary for students who struggle more with it. Unfortunately, school doesn't support this kind of custom treatment very well (Cf. "growth" as per Franklin, Real World of Technology).<br /><br /><h3>Conclusion</h3>Explanations, examples, and exercises that lead to genuine understanding are much needed by math students at all levels. I do not believe there is inherent value in making students suffer by withholding good instruction and leaving them to figure everything out for themselves; they deserve the best possible chance of understanding the material. But undue opportunities to opt out of understanding are a disservice to them. 
Training wheels have their place, but we should make every effort to avoid pointing to training wheels as any student's long term plan for achieving competency in a subject area.<br /><br /><i>Darren Irvine</i><br /><br /><h2>A Novice XC Rider Discovering the Trails of Brandon</h2><i>August 7, 2017</i><br /><br />There are some good trails near Brandon at Brandon Hills that I have been enjoying since last summer. I probably am not cycling regularly enough to achieve significant adaptation to the stress of cycling, but I am enjoying it just the same. (Most weeks I only cycle once and don't do a lot of endurance activity otherwise.)<br /><div><br /></div><div>I am not setting any world records, but it is an interesting skill to develop.<br /><br /></div><!-- TRAILFORKS WIDGET START --> <br /><div class="TrailforksWidgetRidelog" data-h="488px" data-id="1648976" data-map="1" data-stats="1" data-trails="1" data-w="600px"></div>powered by <a href="https://www.trailforks.com/">Trailforks.com</a><!-- TRAILFORKS WIDGET END --><br /><br />Above is my most recent ride up at Brandon Hills, where I decided that since the route I was taking took at most 51 min, I might as well skip the rest stops and just go. I am slowly learning how the bike handles on the trail and how to manage turns and the ups and downs a little more fluidly. I felt like I was getting pretty good. But reality has come along today to rectify my delusion.<br /><br />I was reminded later that day about a trail on the North Hill in Brandon (Hanbury Bike &amp; Hike Trail / North Hill Loop). It was posted as a trail on Trailforks and I thought I should try it out. It is marked as Green/Easy. 
I did not find it so.<br /><br /><!-- TRAILFORKS WIDGET START --> <br /><div class="TrailforksWidgetRidelog" data-h="474px" data-id="1661509" data-map="1" data-stats="1" data-trails="1" data-w="600px"></div>powered by <a href="https://www.trailforks.com/">Trailforks.com</a><br /><br /><script src="https://es.pinkbike.org/326/sprt/j/trailforks/iframeResizer.min.js" type="application/javascript"></script><script type="text/javascript">var script = document.createElement("script"); script.setAttribute("src", "https://es.pinkbike.org/ttl-86400/sprt/j/trailforks/widget.js"); document.getElementsByTagName("head")[0].appendChild(script); var widgetCheck = false; </script><!-- TRAILFORKS WIDGET END -->I do believe I will have to practice harder to be able to handle this course. I have just today learned the value of a helmet by personal experience. I was misled onto a faux path that led to a bridge to nowhere and my front wheel caught fast in a rut that was deep enough and terminated abruptly enough that the wheel did not bump over the edge. The front wheel decided it would no longer make forward progress and my momentum pulled the bike into rotational motion around the ridge of dirt that stopped the bike from moving forward. I went over the handle bars and ended up with the seat of the bike landing on my helmeted head. Pretty soft landing, considering.<br /><br />I went down the tobogganing hill and didn't fall there, but did catch some air. When I landed, my handle bars did a bit of a jerky left turn which I reflexively turned back straight. Maintaining pedal contact was a bit sketchy too. Certainly a highlight for me.<br /><br />The other tumble was a bit more complicated, but suffice to say that after some failed maneuvering about some rocks I lost control of the bike and tried every trick in the book to avoid a fall but ultimately lay on the ground somewhat removed from my bike but nevertheless unharmed. 
More of a side dismount, I believe.<br /><br />Later on I got to the smiley face that is visible from 18th Street. I had to do some walking to get there.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-AJqgxut2D1Q/WYjWWvqQhwI/AAAAAAAAEys/Jo4lbmkYNfsSkrrGowEIIeBo4gATky2_ACLcBGAs/s1600/20170807_122028.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="900" data-original-width="1600" height="360" src="https://3.bp.blogspot.com/-AJqgxut2D1Q/WYjWWvqQhwI/AAAAAAAAEys/Jo4lbmkYNfsSkrrGowEIIeBo4gATky2_ACLcBGAs/s640/20170807_122028.jpg" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">"Come See Again." ...I'll think about it.</td></tr></tbody></table>There is plenty of interesting terrain on this course and I hope to try again some day. I will likely still have to walk through portions of the trail, but I would like to have a proper completion of the trail without any missed sections. I may want to do a confidence builder or two before I go back.<br /><br /><i>Darren Irvine</i><br /><br /><h2>Trimming the Split Ends of Waveforms</h2><i>January 3, 2017</i><br /><br /><a href="http://darrenirvine.blogspot.ca/2016/12/sounds-like-f.html">In a recent post about making sounds in F#</a> (using Pulse Code Modulation, PCM) I noted that my sound production method was creating popping sounds at the ends of each section of sound. I suggested this was due to sudden changes in phase, which I am persuaded is at least involved. 
Whether or not it is the fundamental cause, it may be a relevant observation about instrumental sound production.<br /><br />One way to make the consecutive wave forms not have pops between them might well be to carry the phase from one note to the next, as it would lessen the sudden change in the sound. Another way, probably simpler, and the method I pursue in this post, is to "back off" from the tail—the "split end" of the wave form—and only output full cycles of waves with silence filling in the left over part of a requested duration. My experimentation with it suggests that this approach still results in a percussive sound at the end of notes when played on the speakers. (I suppose that electronic keyboards execute dampening<sup>1</sup> when the keyboardist releases a key to avoid this percussive sound.)<br /><br />The wave forms I was previously producing can be illustrated by Fig. 1. We can reduce the popping by avoiding partial oscillations (Fig. 2). However, even on the final wave form of a sequence of wave forms, there is an apparent percussive sound, suggesting that this percussive effect is not (fully) attributable to the sudden start of the next sound. Eliminating this percussive effect would probably involve dampening. 
Either the dampening would need to be a tail added to the sound beyond the requested duration or a dampening period would need to be built in to the duration requested.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-noSssb1zs6E/WGwLHP_pktI/AAAAAAAAExA/u1JHOPTwGME6BLLOP9toiAWSXgJkYfI9QCLcB/s1600/badsound1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://3.bp.blogspot.com/-noSssb1zs6E/WGwLHP_pktI/AAAAAAAAExA/u1JHOPTwGME6BLLOP9toiAWSXgJkYfI9QCLcB/s1600/badsound1.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 1. A partial oscillation is dropped and a new wave form starts at a phase of 0.0. The "jog" is audible as a "pop".</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-_biz4pmTRk8/WGwMdh1kmnI/AAAAAAAAExM/d1_XXXpoWF4eFyjOndeNz_dIdVrrbCGXgCLcB/s1600/silent%2Bpartials.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-_biz4pmTRk8/WGwMdh1kmnI/AAAAAAAAExM/d1_XXXpoWF4eFyjOndeNz_dIdVrrbCGXgCLcB/s1600/silent%2Bpartials.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 2. "Silent partials" means we don't output any pulse signal for a partial oscillation. 
The feature is perceivable by the human ear as a slight percussive sound.</td></tr></tbody></table><br />It's worth thinking a bit about how typical "physical" sound production has mechanisms in it which "naturally" prevent the popping that we have to carefully avoid in "artificial" sound production.<br /><ul><li>In a wind instrument, you can't have a sudden enough change in air pressure or sudden enough change in the vibration of the reed or instrument body to create pops.</li><li>In a stringed instrument, alteration of the frequency of the vibration of the string will maintain the phase of the vibration from one frequency to the next. </li><ul><li>The wave form is not "interrupted" by anything you can do to the string. There are no truly "sudden" changes you can make to the vibration of the string. </li><li>Any dampening you can do is gradual in hearing terms. </li><li>A "hammer-on" on a guitar string does not suddenly move the position of the string with respect to its vibration period—phase is maintained.</li></ul><li>In a piano, dampening does not create sufficiently sudden changes in the wave form to create a pop.</li></ul><div>In short, (non-digital) instrumental sound production avoids "pops" by physical constraints that produce the effects of dampening and/or phase continuity. Digital sound production is not inherently constrained by mechanisms that enforce these effects.<br /><br />I have altered the sinewave function to fill the last section of the sound with silence, following the pattern suggested by Fig. 2. This does leave an apparent percussive effect, but is a slight improvement in sound.<br /><br /><textarea cols="90" rows="25" wrap="off">// duration is in seconds
let sinewave volume freq_hertz sample_rate duration =
    let TWOPI = 2.0 * System.Math.PI
    // how long is a single step?
    let timestep = 1.0 / (float sample_rate)
    // how many total steps will fit?
    let steps = int(floor(duration / timestep))
    // number of full cycles in duration
    let fullcycles = int(floor(duration * freq_hertz))
    // time for one cycle
    let cycletime = 1.0 / freq_hertz
    // time for all of the full cycles
    let fullcyclestime = cycletime * (float fullcycles)
    let fullCycleSteps = int(floor(fullcyclestime / timestep))
    let silentSteps = steps - fullCycleSteps
    // amplitude ranges between negative and positive (shifted down by half volume)
    let amp = volume / 2.0
    let theta = freq_hertz * TWOPI * timestep
    seq {
        // full oscillations
        for i = 1 to fullCycleSteps do
            yield amp * sin(theta * float(i))
        // fill remainder with silence
        for i = 1 to silentSteps do
            yield 0.0
    }</textarea><br /><br />Something this experiment does not deal with is whether we are hearing an effect of the wave form, per se, or whether we are hearing the behavior of the speakers that are outputting the sound. A sudden stop of something physical within the speaker might generate an <i>actual</i> percussive sound. Also still outstanding is whether the human ear can perceive phase discontinuity in the absence of an amplitude discontinuity.<br /><br /><sup>1</sup> Dampening is a (progressive) reduction in amplitude over several consecutive oscillations of a wave.</div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-30667895984253909352016-12-30T19:04:00.001-06:002017-01-05T10:09:01.085-06:00Packing Spheres into a CubeIn this post I compare three different alignment strategies for putting spheres into a cube. Our main variable will be \(n\), the ratio of cube side to sphere diameter. The question is, how many spheres can we get into the cube? We denote the number of spheres as \(S\).<br /><div class="separator" style="clear: both; text-align: center;"></div><div><br /><h3>Matrix Alignment</h3><div>The simplest alignment of spheres to think about is a matrix alignment, or row, column, layer.
If \(n=10\), then we can fit 1000 spheres into the cube this way. Or, in general, \(S=n^3\).</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-3T41QL47hU8/WGa02ybQvvI/AAAAAAAAEtc/7gQAApH69VEtVcldORfwLa7OApQVwSVyQCLcB/s1600/10x10x10%2Bspheres.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="305" src="https://1.bp.blogspot.com/-3T41QL47hU8/WGa02ybQvvI/AAAAAAAAEtc/7gQAApH69VEtVcldORfwLa7OApQVwSVyQCLcB/s320/10x10x10%2Bspheres.PNG" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 1. Matrix alignment.</td></tr></tbody></table><div>One way to summarize this alignment is that each sphere belongs to its own circumscribing cube and none of these circumscribing cubes overlap each other. For small values of \(n\), this arrangement is optimal.</div><div><br /></div><h3>Square Base Pyramid Alignment</h3><div>An improvement to \(S\), for larger values of \(n\), is had by offsetting the alignment of subsequent layers. Start with a layer in a square arrangement (\(n\) by \(n\)). If you look down on this first layer, it's awfully tempting to nestle the next layer of spheres in the depressions between the spheres. We will make better use of the interior space, although we won't make as good a use of the space near the edges.
But if we can get enough extra layers in to make up for the losses near the outside, that would carry the day.</div><div><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-iYs1LzIv3GI/WGa4iCFQtYI/AAAAAAAAEto/5uudiW1GZuYGgnwGYw1_M32NJj_fr9t-QCLcB/s1600/10x10%2Btop%2Bview.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="400" src="https://4.bp.blogspot.com/-iYs1LzIv3GI/WGa4iCFQtYI/AAAAAAAAEto/5uudiW1GZuYGgnwGYw1_M32NJj_fr9t-QCLcB/s400/10x10%2Btop%2Bview.PNG" width="388" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 2. A few red dots show where we're thinking of putting the next layer of spheres. We won't fit as many on this layer, but the second layer isn't as far removed from the first.</td></tr></tbody></table><div>In the matrix alignment, consecutive layers were all a distance of 1 unit apart (center to center). With this current alignment, we need to know the distance between layers. We need to consider a group of four spheres as a base and another sphere nestled on top and find the vertical distance from the center of the four base spheres to the center of the top sphere. (Implicitly, the flat surface the four base spheres are resting on constitutes <b>horizontal</b>.
<b>Vertical</b> is perpendicular to that.)</div><div><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-IMLDx2cO7y4/WGa9jKa8DiI/AAAAAAAAEt4/HfywVq7XW6YuMs1SupCVSZT3jOzSIOvHgCLcB/s1600/square%2Bbase.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://1.bp.blogspot.com/-IMLDx2cO7y4/WGa9jKa8DiI/AAAAAAAAEt4/HfywVq7XW6YuMs1SupCVSZT3jOzSIOvHgCLcB/s1600/square%2Bbase.PNG" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 3. Top view of a square base pyramid packing arrangement.</td></tr></tbody></table><div>We can see from Fig. 3., that line segment ab is of length \(\sqrt{2}\). We take a section parallel to this line segment, shown in Fig. 4.</div><div><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-0Yd0KZMtGiU/WGa-ON0NXuI/AAAAAAAAEuA/WtYpdLzitc8bJtTG4ONAo-vQ0pKkO0b5wCLcB/s1600/section%2BA-A.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://1.bp.blogspot.com/-0Yd0KZMtGiU/WGa-ON0NXuI/AAAAAAAAEuA/WtYpdLzitc8bJtTG4ONAo-vQ0pKkO0b5wCLcB/s1600/section%2BA-A.PNG" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 4. We recognize a 45°-45°-90° triangle, abc.</td></tr></tbody></table><div>We can calculate the vertical distance between layers, line segment cd, as \(\sqrt{2}/2 \approx 0.7071\dots\). Note that the spheres in this second layer are in basically the same configuration as the bottom layer. We can see this will be the case because the top sphere in Fig. 3., has its outer circumference pass across the point where the lower spheres meet. 
So adjacent spheres in the second layer will be in contact. And, further, second layer spheres will follow the rectangular pattern of depressions in the bottom layer, which are spaced apart the same as the centers of the spheres in the bottom layer. So we will end up with the same organization of spheres in the second layer as the bottom layer, just one fewer row and one fewer column.<br />We can now calculate the number of spheres that will fit in a cube using this alignment as</div><div>$$S = n^2 \lambda_1 + (n-1)^2 \lambda_2,$$</div><div>where \(\lambda_1\) is the number of full layers and \(\lambda_2\) is the number of "nestled" layers. (The third layer, which is indeed "nestled" into the second layer, is also returned to the configuration of the first layer and so I refer to it as a full layer instead. The cycle repeats.) To find \(\lambda_1\) and \(\lambda_2\), we first find the total layers, \(\lambda\), as</div><div>$$\lambda = \left \lfloor{\frac{n-1}{\sqrt{2}/2}}\right \rfloor+1$$</div><div>The rationale for this formula is: find how many spaces between layers there are and compensate for the difference between layers and spaces. Imagine a layer at the very bottom and a layer at the very top (regardless of configuration). The layer at the bottom we get for free; it is the 1 term in the formula. (The top layer may or may not be tight with the top when we're done, but hold it up there for now.) The distance from the center of the bottom layer to the center of the top layer is \(n-1\). The number of times we can fit our spacing interval into this distance (completely) is the number of additional layers we can fit in, the last of them being the top layer.</div><div>The breakdown between \(\lambda_1\) and \(\lambda_2\) will not be exactly even, with \(\lambda_1\) having one more if \(\lambda\) is odd.
We can show this neatly using the ceiling operator:</div><div>$$\lambda_1 = \left \lceil{\lambda/2}\right \rceil$$</div><div>$$\lambda_2 = \lambda - \lambda_1$$</div><div>As early as \(n=4\) we get \(S=66\), which is better than the matrix arrangement by 2 spheres. I modeled this to make sure I didn't let an off-by-one error sneak into my formula (see Fig. 5.).</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-AKtN-yPx94I/WGbP8TTLd2I/AAAAAAAAEuc/EYHuKp0ZWA8wohS7s0J7RzGFmey4z_jJQCEw/s1600/4x4x4.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="576" src="https://4.bp.blogspot.com/-AKtN-yPx94I/WGbP8TTLd2I/AAAAAAAAEuc/EYHuKp0ZWA8wohS7s0J7RzGFmey4z_jJQCEw/s640/4x4x4.PNG" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td class="tr-caption" style="font-size: 12.8px;">Fig. 5. Somewhat surprisingly, rearrangement of spheres (relative to matrix alignment) produces gains in volume coverage even at \(n=4\).</td></tr></tbody></table></td></tr></tbody></table><div>Here it is in Maxima:<br /><br /></div><div><textarea cols="90" rows="5" wrap="off">SquareBasePacking(n) := block([lambda_1, lambda_2, lambda],
    lambda: floor((n-1)/(sqrt(2)/2))+1,
    lambda_1: ceiling(lambda/2),
    lambda_2: lambda - lambda_1,
    n^2 * lambda_1 + (n-1)^2 * lambda_2);</textarea><br /><br /></div><h3>Tetrahedral Alignment</h3><div>Another rearrangement is similar to a honeycomb. A honeycomb is an arrangement of hexagons that meet perfectly at their edges. Pictures of this are readily available online. This <a href="https://en.wikipedia.org/wiki/Honeycomb">Wikipedia article</a> suffices.
Imagine putting a circle or a sphere in each of these honeycomb cells. This will produce a single layer which is more compact, although it will increase the distance between layers. </div></div><div><br /></div><div>So, what do we need to know to make a formula for this one?</div><div><ol><li>Find the distance between successive layers.</li><li>Formula for the number of layers.</li><li>Formula for the number of spheres in each layer.</li><ol><li>This depends on knowing the range of layer configurations (we will only have two different layer configurations).</li></ol><li>Some assembly required.</li></ol><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div>This is the same procedure we followed already for the square based pyramid alignment, but we are being more explicit about the steps.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-HO_3ImtIHWQ/WGbefzfktvI/AAAAAAAAEus/hrkyf_nA_dAmjoRlM0hxv9D5_GGdMLJ9gCLcB/s1600/tetrahedal%2Bpacking.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="260" src="https://2.bp.blogspot.com/-HO_3ImtIHWQ/WGbefzfktvI/AAAAAAAAEus/hrkyf_nA_dAmjoRlM0hxv9D5_GGdMLJ9gCLcB/s320/tetrahedal%2Bpacking.PNG" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 6. We need to find the distance of line segment ad.</td></tr></tbody></table>We recognize we have an equilateral triangle and so we can identify a 30°-60°-90° triangle on each half of altitude ab. We have drawn all of the angle bisectors for this triangle, which meet at a common point (a commonly known property of triangles). We can recognize/calculate the length of bd as \(\frac{1}{2\sqrt{3}}\).
We can then calculate the length of ad as \(\frac{\sqrt{3}}{2} - \frac{1}{2\sqrt{3}} = 1/\sqrt{3}\). From the Pythagorean theorem, we have a length for cd of \(\sqrt{2/3}\).<br />The number of spheres in the first layer will require us to use the distance for ab. We now need to find the number of rows in the bottom layer. We have<br />$$L_1=n \rho_1 + (n-1) \rho_2,$$<br />where \(\rho_1\) and \(\rho_2\) are the number of each kind of row in the layer and \(L_1\) is the number of spheres in the first layer. Since rows are spaced by \(\sqrt{3}/2\), we have a number of rows, \(\rho\), of<br />$$\rho = \left \lfloor{\frac{n-1}{\sqrt{3}/2}}\right \rfloor+1$$<br />and \(\rho_1\) and \(\rho_2\) can be broken out similar to our \(\lambda\)'s earlier.<br />$$\rho_1 = \left \lceil{\frac{\rho}{2}}\right \rceil$$<br />$$\rho_2 = \rho - \rho_1.$$<br /><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-tGMppyuqgPI/WGbgTSPoXHI/AAAAAAAAEu4/AdUwYTl38OYQBuZ2fgZgLy05LwPYOSe9wCLcB/s1600/section%2BB-B.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><br /><img border="0" height="400" src="https://1.bp.blogspot.com/-tGMppyuqgPI/WGbgTSPoXHI/AAAAAAAAEu4/AdUwYTl38OYQBuZ2fgZgLy05LwPYOSe9wCLcB/s400/section%2BB-B.PNG" width="373" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 7. We're looking for the distance of line segment cd which can be found using ac and ad.</td></tr></tbody></table>Now, there is a little matter of how nicely (or not) the second layer will behave for us. It is noteworthy that we are not able to put a sphere into every depression. We skip entire rows of depressions. (The Wikipedia article on sphere packing referred to these locations as the "sixth sphere".) These skipped locations may play mind games on you as you imagine going to the third layer. 
Nevertheless, the third layer can be made identical to the first, which is what I do here.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-Gg3WvYGIyLw/WGbtozeWs6I/AAAAAAAAEvI/ZIdEcQjCa0syLUf4z8fhHR6s1wxAYD9bACLcB/s1600/trouble.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-Gg3WvYGIyLw/WGbtozeWs6I/AAAAAAAAEvI/ZIdEcQjCa0syLUf4z8fhHR6s1wxAYD9bACLcB/s1600/trouble.PNG" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 8. The first layer is blue. The second layer is red. Are we going to have trouble with you Mr. Hedral?</td></tr></tbody></table>There's a couple of things that happen on the second layer. First, instead of starting with a row of size \(n\), we start with a row of size \(n-1\). The second issue is that we may not get as many total rows in. We could do the same formula for rows again but now accounting for the fact that we have lost an additional \(\frac{1}{2\sqrt{3}}\) before we start. However, we can reduce this to a question of "do we lose a row or not?" The total distance covered by the rows, \(R\), for the bottom layer is<br />$$R = (\rho-1) \frac{\sqrt{3}}{2} + 1$$<br />If \(n-R\ge\frac{1}{2\sqrt{3}}\), then we have the same number of total rows. Otherwise, we have one less row. We let \(r_L\) represent the number of rows lost in the second layer, which will either be 0 or 1. 
Noting that the order of the rows is now \(n-1\) followed by \(n\), we can give an expression for the number of spheres, \(L_2\), in our second layer now.<br />$$L_2 = (n-1)(\rho_1 - r_L) + n\rho_2$$<br />We have a very similar formula for the total number of spheres in the box.<br />$$S = L_1\lambda_1 + L_2\lambda_2,$$<br />where<br />$$\lambda_1 = \left \lceil{\frac{\lambda}{2}}\right \rceil$$<br />$$\lambda_2 = \lambda - \lambda_1,$$<br />where<br />$$\lambda = \left \lfloor{\frac{n-1}{\sqrt{2/3}}}\right \rfloor+1.$$<br />My Maxima function for this is<br /><textarea cols="90" rows="13" wrap="off">TetrahedralPacking(n) := block(
    [lambda_1, lambda_2, lambda, rho_1, rho_2, rho, r_L, L_1, L_2, R],
    rho: floor((n-1)/(sqrt(3)/2))+1,
    rho_1: ceiling(rho/2),
    rho_2: rho - rho_1,
    L_1: n*rho_1 + (n-1)*rho_2,
    R: (rho-1) * sqrt(3)/2 + 1,
    r_L: if n - R >= 1/(2*sqrt(3)) then 0 else 1,
    L_2: (n-1)*(rho_1 - r_L) + n*rho_2,
    lambda: floor((n-1)/sqrt(2/3))+1,
    lambda_1: ceiling(lambda/2),
    lambda_2: lambda - lambda_1,
    L_1 * lambda_1 + L_2 * lambda_2);</textarea><br /><br /><h3>Comparison Between Methods</h3>It isn't too hard to tell that one of the latter two approaches is going to produce better results than the first after some value of \(n\) (Fig. 9.). What is more surprising is that our latter two alignments stay neck and neck up into very high values of \(n\). If we divide them by the volume of the containing cube, both appear to be a roundabout way to arrive at the square root of 2! (Fig.
10.)<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-JlaqNfa1NPI/WGb8jxA6KCI/AAAAAAAAEvY/5pnnP4ggxfsKLR0BBfB7ggLFiShUEuu0ACLcB/s1600/overtake.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-JlaqNfa1NPI/WGb8jxA6KCI/AAAAAAAAEvY/5pnnP4ggxfsKLR0BBfB7ggLFiShUEuu0ACLcB/s1600/overtake.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 9. Both of the latter approaches give much better packing densities than the matrix alignment.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-g0mKFGMpOJs/WGgQ5GnffwI/AAAAAAAAEwk/nsutDvE6Fss5hStHGhv4DiDCONPnP4cdgCLcB/s1600/revised2.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-g0mKFGMpOJs/WGgQ5GnffwI/AAAAAAAAEwk/nsutDvE6Fss5hStHGhv4DiDCONPnP4cdgCLcB/s1600/revised2.PNG" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 10. Square base packing in blue, tetrahedral packing in red, both divided by total volume (# of spheres/unit volume). 
Unclear if there is a distinct winner and we seem to be getting closer to \(\sqrt{2}\).</td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"></div>Let's see which one wins more often for positive integer values of \(n\) in Maxima.</div></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-FPQ1aYY3-Po/WGgPzoUrkII/AAAAAAAAEwc/FOO5rlNG1GcDA0bi_alzGCskCgVvBx06ACLcB/s1600/revised.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-FPQ1aYY3-Po/WGgPzoUrkII/AAAAAAAAEwc/FOO5rlNG1GcDA0bi_alzGCskCgVvBx06ACLcB/s1600/revised.PNG" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div><div>So, there's no runaway winner, but if you're a betting man, bet on square base pyramid packing out of the available choices in this post. Regardless, it appears that both of these packing arrangements approach optimal packing (see <a href="https://en.wikipedia.org/wiki/Sphere_packing">Sphere packing</a>).
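Since an off-by-one error already tried to sneak in on me once, here is a direct transcription of the two Maxima functions above into Python as a cross-check (just a sketch of my own; the function names are mine):

```python
from math import ceil, floor, sqrt

def square_base_packing(n):
    """Spheres of unit diameter in an n-unit cube, square base pyramid alignment."""
    lam = floor((n - 1) / (sqrt(2) / 2)) + 1     # total layers
    lam1 = ceil(lam / 2)                         # full (n by n) layers
    lam2 = lam - lam1                            # nestled (n-1 by n-1) layers
    return n**2 * lam1 + (n - 1)**2 * lam2

def tetrahedral_packing(n):
    """Spheres of unit diameter in an n-unit cube, tetrahedral alignment."""
    rho = floor((n - 1) / (sqrt(3) / 2)) + 1     # rows in the bottom layer
    rho1 = ceil(rho / 2)
    rho2 = rho - rho1
    L1 = n * rho1 + (n - 1) * rho2               # spheres in a first-type layer
    R = (rho - 1) * sqrt(3) / 2 + 1              # distance covered by the rows
    rL = 0 if n - R >= 1 / (2 * sqrt(3)) else 1  # do we lose a row in layer two?
    L2 = (n - 1) * (rho1 - rL) + n * rho2        # spheres in a second-type layer
    lam = floor((n - 1) / sqrt(2 / 3)) + 1       # total layers
    lam1 = ceil(lam / 2)
    lam2 = lam - lam1
    return L1 * lam1 + L2 * lam2

print(square_base_packing(4))               # 66, matching the earlier claim
print(square_base_packing(1000) / 1000**3)  # about 1.41, sneaking up on sqrt(2)
```

Both per-unit-volume counts crawl toward \(\sqrt{2}\) from below as \(n\) grows, consistent with the plots above.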
My density calculation (allowing the \(\sqrt{2}\) conjecture) for volume coverage comes out to</div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-vlqh721V-O4/WGcB4DEK_cI/AAAAAAAAEwA/knWdNtF7H8UViDaZt93vWNq-nie9SC-VwCLcB/s1600/density.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-vlqh721V-O4/WGcB4DEK_cI/AAAAAAAAEwA/knWdNtF7H8UViDaZt93vWNq-nie9SC-VwCLcB/s1600/density.PNG" /></a></div><div>Gauss, old boy, seems to have approved of this number.</div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-86947473596755907022016-12-27T17:27:00.001-06:002017-01-03T15:53:00.248-06:00Sounds Like F#<a href="https://darrenirvine.blogspot.ca/2016/12/the-sound-of-f.html">Last time</a>, I thought that I was going to have to do some special research to account for multiple sources of sound. Reading this <a href="http://hep.physics.indiana.edu/~rickv/Multiple_sources.html">short physics snippet</a>, I can see that handling multiple sources really is as simple as my initial thinking inclined me to believe. You can ignore all of the decibel stuff and focus on the statements about amplitude, because, of course, I am already working in terms of amplitude. What can I say, I let the mass of voices on the internet trouble my certainty unduly? Sort of.<br /><br />There are two important issues to consider:<br /><ol><li>Offsetting of the signals.</li><li>Psychological issues regarding the perception of sound.</li></ol>Regarding number 2, here's a quick hit I found: <a href="http://hyperphysics.phy-astr.gsu.edu/hbase/Sound/loud.html">http://hyperphysics.phy-astr.gsu.edu/hbase/Sound/loud.html</a>. It seems credible, but I'm not sufficiently interested to pursue it further.
And so, my dear Perception, I salute you in passing, Good day, Sir!<br /><br />Offsetting of the signals is something that is implicitly accounted for by naively adding the amplitudes, but there is a question of what the sineChord() function is trying to achieve. Do I want:<br /><ol><li>The result of the signals beginning at exactly the same time (the piano player hits all his notes exactly[?] at the same time—lucky dog, guitar player not as much[?]. Hmm...)?</li><li>The average case of sounds which may or may not be in phase?</li></ol><div>Regarding case 2, <a href="https://www.quora.com/Sound-How-do-several-people-shouting-together-make-a-louder-noise-than-one-person-on-their-own">Loring Chien suggests in a comment on Quora</a> that the average case would be a 41% increase in volume for identical sounds not necessarily in phase. At first glance, I'm inclined to think it is a plausible statement. Seeming to confirm this suggestion is the summary about <a href="http://www.sengpielaudio.com/calculator-amplitude.htm">Adding amplitudes (and levels) found here</a>, which also explains where the 41% increase comes from, namely, from \(\sqrt{2}\approx 1.414\dots\). Good enough for me.</div><div><br /></div><div>All such being said, I will pass it by as I am approaching this from a mechanical perspective. I'm neither interested in the psychological side of things, nor am I interested in getting average case volumes. I prefer the waves to interact freely. It would be neat, for example, to be able to manipulate the offset on purpose and get different volumes. The naive approach is a little more subject to experimentation.</div><div>I would like to go at least one step further though and produce a sequence of notes and chords and play them without concern of fuzz or pops caused by cutting the tops of the amplitudes off.
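As an aside, that \(\sqrt{2}\) figure is easy to poke at numerically. The sketch below (my own, independent of the linked pages) sums two identical sine waves at many different relative phases and averages the power (mean square) of the result; the square root of that averaged power comes out \(\sqrt{2}\) times a single wave's RMS amplitude, i.e. about 41% louder:

```python
from math import pi, sin, sqrt

def rms(samples):
    """Root mean square of a list of samples."""
    return sqrt(sum(s * s for s in samples) / len(samples))

N = 1000                                    # samples across one full period
t = [2 * pi * i / N for i in range(N)]
single = rms([sin(x) for x in t])           # RMS of one unit sine = 1/sqrt(2)

# Average the power (RMS squared) of the summed signal over a full circle
# of phase offsets between the two sines.
offsets = [2 * pi * k / 360 for k in range(360)]
avg_power = sum(rms([sin(x) + sin(x + phi) for x in t]) ** 2
                for phi in offsets) / len(offsets)

print(sqrt(avg_power) / single)             # 1.414..., i.e. sqrt(2)
```

In phase (offset 0), the amplitudes simply double; perfectly out of phase, they cancel; the \(\sqrt{2}\) is the average over all offsets, which is exactly the "may or may not be in phase" case 2 above.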
We need two pieces: a) a limiter and b) a means of producing sequences of sounds in a musically helpful way.</div><div>A limiter is pretty straightforward and is the stuff of basic queries. I have the advantage of being able to determine the greatest amplitude prior to playing any of the sound. The function limiter() finds the largest absolute amplitude and scales the sound to fit.</div><div><br /></div><textarea cols="90" rows="10" wrap="off">let limiter volume wave =
    let amp = volume / 2.0
    let max = wave |> Seq.map (fun u -> abs(u)) |> Seq.max
    if max <= amp then
        wave
    else
        let compression = amp / (float max)
        Seq.map (fun u -> u*compression) wave</textarea><br />Here is an example usage:<br /><br /><textarea cols="80" rows="6" wrap="off">let ChordAOpen = seq[69; 76; 81; 85; 88]
notes.sineChord 65535.0 ChordAOpen sample_rate 1.0
|> notes.limiter 65535.0
|> Seq.map (fun u -> int(u))
|> notes.PlayWaveAsync sample_rate</textarea><br /><div>One concern here is that we may make some sounds that we care about too quiet by this approach. To address such concerns we would need a compressor that affects sounds from the bottom and raises them as well. The general idea of a sound compressor is to take sounds within a certain amplitude range and compress them into a smaller range. So, sounds outside your range (above or below) get eliminated and sounds within your range get squeezed into a tighter dynamic (volume) range. This might be worth exploring later, but I do not intend to go there in this post.</div><br />Last post I had a function called <span style="font-family: "courier new" , "courier" , monospace;">sineChord()</span>, which I realize could be generalized. Any instrument (or group of identical instruments) could be combined using a single chord function that would take as a parameter a function that converts a frequency into a specific instrument sound.
This would apply to any instruments where we are considering the notes of the chord as starting at exactly the same time (guitar would need to be handled differently). So, instead of <span style="font-family: "courier new" , "courier" , monospace;">sineChord()</span>, let's define <span style="font-family: "courier new" , "courier" , monospace;">instrumentChord()</span><span style="font-family: inherit;">:</span><br /><span style="font-family: inherit;"><br /></span><textarea cols="90" rows="11" wrap="off">// expects:
//   volume (2 x amplitude)
//   frequency (Hertz)
//   sample_rate (Hertz)
//   duration (seconds)
let instrumentChord (instrument: (float->float->int->float->seq<float>)) (volume:float) (sample_rate:int) (duration:float) notes =
    notes
    |> Seq.map frequency
    |> Seq.map (fun f -> instrument volume f sample_rate duration)
    |> Seq.reduce (fun a b -> Seq.map2 (fun sa sb -> sa + sb) a b)</textarea><br /><br />Next we will create a simple class to accumulate a sequence of notes/chords and durations, which we can then play our sounds from.<br /><br /><h3>Reference Chords and Values</h3>Here are a few reference chord shapes and duration values. I define the chord shape and the caller supplies the root sound to get an actual chord. The durations are relative to the whole note and the chord shapes are semi-tone counts for several guitar chord shapes. The root of the shapes is set to 0 so that by adding the midi-note value to all items you end up with a chord using that midi note as the root.
This is the job of chord().<br /><br /><textarea cols="90" rows="25" wrap="off">let m1st = 1.0      // whole note
let m3_4 = 0.75     // dotted-half
let m2nd = 0.5      // half
let m4th = 0.25     // quarter
let m8th = 0.125    // eighth
let m16th = 0.0625  // sixteenth
let m32th = 0.03125 // thirty-secondth

let EShape = seq[0; 7; 12; 16; 19; 24]
let AShape = seq[0; 7; 12; 16; 19]
let DShape = seq[0; 7; 12; 16]
let DAShape = seq[-5; 0; 7; 12; 16]
let CShape = seq[0; 4; 7; 12; 16]
let CGShape = seq[-5; 0; 4; 7; 12; 16]
let GShape = seq[0; 4; 7; 12; 16; 24]
let Gno3Shape = seq[0; 7; 12; 19; 24]
let EmShape = seq[0; 7; 12; 15; 19; 24]
let AmShape = seq[0; 7; 12; 15; 19]
let DmShape = seq[0; 7; 12; 15]
let DmAShape = seq[-5; 0; 7; 12; 15]

let chord root shape = shape |> Seq.map (fun u -> root + u)</textarea><br /><br />Next, InstrumentSequence is an accumulator object that knows how to output a wave form and play it. For simplicity, it is limited to the classic equal temperament western scale. Note that it takes a function that has the task of turning a frequency into a wave form, which means that by creating a new function to generate more interesting wave forms, we can have them played by the same accumulator.
If we only want it to produce the wave form, so that we can aggregate the wave forms into some other composition ourselves, we can access the wave form through the routine WaveForm.<br /><br /><textarea cols="90" rows="35" wrap="off"> type InstrumentSequence () = let chords = new ResizeArray<seq<int>>() let metrics = new ResizeArray<float>() member this.AddNote note duration = chords.Add(Seq.singleton note) metrics.Add(duration) member this.AddChord notes duration = chords.Add(notes) metrics.Add(duration) member this.Chords = chords member this.WaveForm instrument volume tempo sample_rate = let whole_note_duration = 240.0 / (float tempo) let runs = Seq.map2 (fun chord metric -> let duration = metric * whole_note_duration instrumentChord instrument volume sample_rate duration chord) chords metrics seq { for r in runs do yield! r} member this.PlayInstrument instrument volume tempo sample_rate = this.WaveForm instrument volume tempo sample_rate |> limiter volume |> Seq.map int |> PlayWave sample_rate member this.PlayInstrumentAsync instrument volume tempo sample_rate = this.WaveForm instrument volume tempo sample_rate |> limiter volume |> Seq.map int |> PlayWaveAsync sample_rate </textarea><br /><br />A sample use of the class, playing a very basic E, A, B chord progression, follows:<br /><br /><textarea cols="90" rows="14" wrap="off">let song = notes.InstrumentSequence() // E major song.AddChord (notes.chord 40 notes.EShape) notes.m2nd // A major song.AddChord (notes.chord 45 notes.AShape) notes.m4th // B major bar chord song.AddChord (notes.chord 47 notes.DShape) notes.m4th song.AddChord (notes.chord 40 notes.EShape) notes.m2nd song.AddChord (notes.chord 45 notes.AShape) notes.m4th song.AddChord (notes.chord 47 notes.DShape) notes.m4th song.AddChord (notes.chord 40 notes.EShape) notes.m2nd song.PlayInstrumentAsync notes.sinewave 65535.0 92 44100 </textarea><br /><br />To get the same progression with minors, use the EmShape, etc., values. 
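<br /><br />To make the note arithmetic concrete, here is the shape-plus-root logic and the equal-temperament frequency conversion from the F# code above, restated as a small Python sketch (Python is used here purely for illustration):

```python
# Mirrors the F# helpers: chord() adds a MIDI root note to every
# semitone offset in a shape, and frequency() converts a MIDI note
# number to Hertz (A440 tuning, where MIDI note 69 = 440 Hz).

E_SHAPE = [0, 7, 12, 16, 19, 24]   # EShape from the F# code

def chord(root, shape):
    # add the midi-note value to all items in the shape
    return [root + s for s in shape]

def frequency(note):
    # same as the F#: 440.0 * exp((note - 69)/12 * log(2))
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

print(chord(40, E_SHAPE))  # [40, 47, 52, 56, 59, 64]
print(frequency(69))       # 440.0
print(frequency(57))       # 220.0, one octave down
```

So the E shape rooted at MIDI note 40 (the low E of a guitar in standard tuning) gives the six notes of an open E major chord.<br /><br />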
But, at the end of the day, we've still got sine waves, lots of them. And, it's mostly true that,<br /><br /> boring + boring = boring.<br /><br />I will not attempt the proof. You can make some more examples to see if you can generate a counter-example.<br /><br />There is also some unpleasant popping that I would like to understand a bit better—for now I'm going to guess it relates to sudden changes in the phases of the waves at the terminations of each section of sound. Not sure what the solution is but I'll guess that some kind of dampening of the amplitude at the end of a wave would help reduce this.Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-35118073252811968622016-12-24T17:48:00.002-06:002016-12-27T21:53:24.184-06:00The Sound of F#I'm doing a bit of reading and dabbling in the world of sound with the hope that it will help me to tackle an interesting future project. One of the first steps to a successful project is to know more about what the project will actually entail. Right now I'm in exploratory mode. It's a bit like doing drills in sports or practicing simple songs from a book that are not really super interesting to you except for helping you learn the instrument you want to learn. I'm thinking about learning to "play" the signal processor, but it seemed like a good start to learn to produce signals first. And sound signals are what interest me at the moment.<br /><br />Sound is pretty complicated, but we'll start with one of the most basic of sounds and what it takes to make it in Windows, writing in F#. Our goal in this post is to play sounds asynchronously in F# Interactive.<br /><br />The first iteration of this comes from Shawn Kovac's response to his own question found here at <a href="http://stackoverflow.com/questions/19672593/generate-morse-code-or-any-audio-in-net-c-or-vb-net-without-3rd-party-depe">Stack Overflow</a>. Below is my port to F#. 
I have used the <b>use</b> keyword to let F# handle the IDisposable interfaces on the stream and writer to take care of all the closing issues. It's important to note that the memory stream must remain open while the sound is playing, which means this routine is stuck as synchronous.<br /><br /><textarea cols="90" rows="50" wrap="off"> // ported from Shawn Kovac's response to his own question at: // http://stackoverflow.com/questions/19672593/generate-morse-code-or-any-audio-in-net-c-or-vb-net-without-3rd-party-depe let PlaySound frequency msDuration volume = let TAU = 2.0 * System.Math.PI let formatChunkSize = 16 let headerSize = 8 let formatType = int16(1) let tracks = int16(1) let samplesPerSecond = 44100 let bitsPerSample = int16(16) let frameSize = int16(tracks * ((bitsPerSample + int16(7)) / int16(8))) let bytesPerSecond = samplesPerSecond * int(frameSize) let waveSize = 4 let samples = (samplesPerSecond * msDuration) / 1000 let dataChunkSize = samples * int(frameSize) let fileSize = waveSize + headerSize + formatChunkSize + headerSize + dataChunkSize // var encoding = new System.Text.UTF8Encoding(); use mStrm = new MemoryStream() use writer = new BinaryWriter(mStrm) writer.Write(0x46464952) // = encoding.GetBytes("RIFF") writer.Write(fileSize) writer.Write(0x45564157) // = encoding.GetBytes("WAVE") writer.Write(0x20746D66) // = encoding.GetBytes("fmt ") writer.Write(formatChunkSize) writer.Write(formatType) writer.Write(tracks) writer.Write(samplesPerSecond) writer.Write(bytesPerSecond) writer.Write(frameSize) writer.Write(bitsPerSample) writer.Write(0x61746164) // = encoding.GetBytes("data") writer.Write(dataChunkSize) let theta = frequency * TAU / (float samplesPerSecond) // 'volume' is UInt16 with range 0 thru UInt16.MaxValue ( = 65 535) // we need 'amp' to have the range of 0 thru Int16.MaxValue ( = 32 767) let amp = volume >>> 1 // so we simply set amp = volume / 2 for step = 0 to samples-1 do let s = int16(float(amp) * sin(theta * (float step))) 
writer.Write(s) // set ourselves up at the beginning of the file mStrm.Seek(int64(0), SeekOrigin.Begin) |> ignore // we use a player object which requires its memory stream to exist in order to play use player = new System.Media.SoundPlayer(mStrm) player.PlaySync() </textarea><br /><br />To get an asynchronous sound we can shove the PlaySound function into a background thread and let it go. This looks like the following:<br /><br /><textarea cols="90" rows="4" wrap="off"> let PlaySoundAsync frequency msDuration volume = let soundThread = Thread((fun () -> PlaySound frequency msDuration volume)) soundThread.IsBackground <- true soundThread.Start() </textarea><br /><br />A limitation of the PlaySound() approach given above is that it limits your method of sound production. The details of the sound produced are buried inside the function that plays the sound. I don't want to have a bunch of separate routines: one that plays sine waves, one that plays saw tooth, one that plays chords with two notes, one that plays chords with three notes, one that includes an experimental distortion guitar—whatever. The sound player shouldn't care about the details; it should just play the sound, whatever it is. (Not a criticism against Kovac, he had a narrower purpose than I do; his choice was valid for his purpose.)<br /><br />I want to decouple the details of the sound to be produced from the formatting and playing of the sound. 
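<br /><br />As an aside, the hexadecimal constants written into the header are less mysterious than they look: each one is simply a four-character ASCII chunk tag packed as a little-endian 32-bit integer. A quick Python check (for illustration only, not part of the F# code):

```python
# Each WAV header constant is a four-byte ASCII tag stored little-endian.
import struct

def tag_bytes(magic):
    # pack a 32-bit constant little-endian and recover the ASCII tag
    return struct.pack("<I", magic)

print(tag_bytes(0x46464952))  # b'RIFF'
print(tag_bytes(0x45564157))  # b'WAVE'
print(tag_bytes(0x20746D66))  # b'fmt '
print(tag_bytes(0x61746164))  # b'data'
```

This is why the comments beside the Write calls read encoding.GetBytes("RIFF") and so on.<br /><br />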
Here the basic formatting and playing of the sound is packaged up in a function that will take any wave form sequence we give it (a sequence of integer values):<br /><br /><textarea cols="90" rows="60" wrap="off"> // suggested sample_rate = 44100 let PlayWave sample_rate wave = let TWOPI = 2.0 * System.Math.PI let formatChunkSize = 16 let formatType = int16(1) let tracks = int16(1) let bitsPerSample = int16(16) let frameSize = int16(tracks * ((bitsPerSample + int16(7)) / int16(8))) let bytesPerSecond = sample_rate * int(frameSize) use mStrm = new MemoryStream() use writer = new BinaryWriter(mStrm) // the comment to the right gives the byte offset of each item of data // this is so we can come back later and write to two of these locations writer.Write(0x46464952) // = encoding.GetBytes("RIFF"): 0 writer.Write(0) // filesize needs to be calculated and filled in at byte 4 writer.Write(0x45564157) // = encoding.GetBytes("WAVE"): 8 writer.Write(0x20746D66) // = encoding.GetBytes("fmt "): 12 writer.Write(formatChunkSize) //: 16 writer.Write(formatType) //: 20 writer.Write(tracks) //: 22 writer.Write(sample_rate) //: 24 writer.Write(bytesPerSecond) //: 28 writer.Write(frameSize) //: 32 writer.Write(bitsPerSample) //: 34 writer.Write(0x61746164) // = encoding.GetBytes("data"): 36 writer.Write(0) // dataChunkSize needs to be calculated and filled in: 40 let mutable samples = 0 wave |> Seq.iter (fun (s:int) -> writer.Write((int16 s)) samples <- samples + 1) // fill in the datachunk mStrm.Seek(int64(40), SeekOrigin.Begin) |> ignore let dataChunkSize = samples * int(frameSize) writer.Write(dataChunkSize) // fill in the filesize let filesize = 4 + (8 + formatChunkSize) + (8 + dataChunkSize) mStrm.Seek(int64(4), SeekOrigin.Begin) |> ignore writer.Write(filesize) // set ourselves up at the beginning of the file to play it mStrm.Seek(int64(0), SeekOrigin.Begin) |> ignore // we use a player object which requires its memory stream to exist in order to play use player = new 
System.Media.SoundPlayer(mStrm) player.PlaySync() let PlayWaveAsync sample_rate wave = let soundThread = Thread((fun () -> PlayWave sample_rate wave)) soundThread.IsBackground <- true soundThread.Start() </textarea><br /><br />I haven't decoupled this as far as we could have. If we wanted to handle multiple tracks, for example, we would need to take this abstraction a little further.<br /><br />Now, we also need some functions for generating wave forms. Sequences are probably a good choice for this because you will not need to put them into memory until you are ready to write them to the memory stream in the PlayWave function. What exactly happens to sequence elements after they get iterated over for the purpose of writing to the MemoryStream, I'm not quite clear on. I know memory is allocated by MemoryStream, but whether the sequence elements continue to have an independent life for some time after they are iterated over, I'm not sure about. The ideal thing, of course, would be for them to come into existence at the moment they are to be copied into the memory stream (which they will, due to lazy evaluation of sequences) and then fade immediately into the bit bucket—forgotten the moment after they came into being.<br /><br />Here are a few routines related to making chords, which come with an important caveat: the volume control is a bit tricky.<br /><br /><textarea cols="90" rows="25" wrap="off"> let ChordEOpen = seq[64; 71; 76; 80; 83; 88] let ChordAOpen = seq[69; 76; 81; 85; 88] // duration is in seconds let sinewave volume freq_hertz sample_rate duration = let TWOPI = 2.0 * System.Math.PI let timestep = 1.0 / (float sample_rate) let steps = int(floor(duration / timestep)) let theta = freq_hertz * TWOPI * timestep seq { for i = 1 to steps do yield volume * sin(theta * float(i)) } let frequency note = 440.0 * exp(float(note - 69)/12.0 * log(2.0)) // produce sine wave form for several different notes // volume is the volume of each individual note; // it is up 
to the caller to deal with the possible need for compression on the resultant signal // or to keep the volume under control from the start let sineChord volume notes sample_rate duration = notes |> Seq.map frequency |> Seq.map (fun f -> sinewave volume f sample_rate duration) |> Seq.reduce (fun a b -> Seq.map2 (fun sa sb -> sa + sb) a b) </textarea><br /><br />Here is a sample script to run in F# Interactive:<br /><br /><textarea cols="80" rows="4" wrap="off">let sample_rate = 44100 notes.sineChord (65535.0/6.0) (seq [69; 76; 81; 85; 88; 93]) sample_rate 1.0 |> Seq.map (fun u -> int(u)) |> notes.PlayWaveAsync sample_rate </textarea><br /><br />The above plays a version of an A chord (5th fret E-<b>form</b> bar chord on the guitar).<br /><br />Note that I have divided the volume by the number of notes in the chord. If we don't watch the volume like this and we let it get away on us, it will change the sound fairly dramatically. At worst you get something like the "snow" from old analog TVs when you weren't getting a signal; if you just go a little over, you will get a few pops like a slightly degraded vinyl recording.<br /><br />We could include volume control in our wave form production routines. This wouldn't necessarily violate our decoupling ideals. The main concern I have is that I want the indicated volume to represent the volume of one note played, with additional notes causing some increase in overall volume. I want the dynamic fidelity of the wave forms to be preserved with different sounds joining together. Even after that, you may want to run a limiter over the data to make sure the net volume of several sounds put together doesn't take you over your limit.<br /><br />However, I need to do more research to achieve these ideals. 
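<br /><br />In the meantime, the simplest safeguard is hard clipping. The limiter used by PlayInstrument earlier is not shown in this post, so the following is only an assumed minimal version, sketched in Python: clamp every sample into the range the 16-bit samples can hold.

```python
# A minimal hard-clipping limiter (an assumption: the actual F# limiter
# is not shown in the post). Samples beyond +/-limit are clamped, which
# prevents integer overflow at the cost of distorting loud passages.

def limiter(limit, wave):
    return [max(-limit, min(limit, s)) for s in wave]

print(limiter(32767.0, [40000.0, -50000.0, 100.0]))
# [32767.0, -32767.0, 100.0]
```

A real limiter would ramp the gain down smoothly instead of clipping hard, which is part of the research still to do.<br /><br />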
For now, I will break it off there.Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-75803152720304914132016-12-23T20:14:00.001-06:002016-12-23T20:14:52.513-06:00Unicode in LispWorks<h3>Background</h3>I have a really basic vocabulary quiz program that I have been dabbling with, partly as a means of learning to use LispWorks and partly as a way to get a slightly different way of reviewing my vocabulary in my personal study of NT Greek. I obtained the basic vocabulary from Mounce's <a href="https://billmounce.com/flashworks">Flashworks</a> program. There are some text files for vocabulary for a few different languages, including Greek and Hebrew. These files can be read as UTF-8 Unicode. <br /><br /><div>Unicode support in Common Lisp is non-uniform, owing to the fact that the language standard was established before Unicode arrived [<a href="http://www.gigamonkeys.com/book/numbers-characters-and-strings.html#characters">Seibel</a>]. So, what works in one implementation may not work in another. I have found this in working with LispWorks and Clozure Common Lisp. Since I wanted something easy, not something clever, I have not attempted to bridge these. I just want the "easy" answers for the basic problems of reading and writing Unicode in LispWorks.<br /><h3>Reading and Writing Unicode</h3></div><div>If I want to read a basic tab-delimited Unicode file, line by line, I use <span style="font-family: "courier new" , "courier" , monospace;">get-text-data</span> (below). 
<span style="font-family: "courier new" , "courier" , monospace;">keepers</span> allows me to take a subset of the columns in the file.<br /><br /></div><textarea cols="87" edit="off" readonly="" rows="25" wrap="off">(defun get-text-data (filename &optional keepers) (with-open-file (stream filename :external-format :utf-8) (loop for ln = (special-read stream) then (read-line stream nil) while ln collect (filter-columns (cl-utilities:split-sequence #\TAB ln) keepers)))) (defun special-read (stream) (let ((ln (read-line stream nil))) (when (and ln (> (length ln) 0) (eq (aref ln 0) #\Zero-Width-No-Break-Space)) (setf ln (subseq ln 1))) ln)) ;; list can be any non-empty list ;; keepers should be an array and columns should be specified by zero-based indexes ;; no indices in keepers should exceed the number of items in the list ;; the order of the indices in keepers is the order in which they will be returned ;; i.e., the order of values can be changed this way (defun filter-columns (list keepers) (if keepers (loop with arr = (concatenate 'vector list) for i from 0 to (1- (length keepers)) collect (aref arr (aref keepers i))) list))</textarea><br /><div><br />The first awful thing you will notice here is the <span style="font-family: "courier new" , "courier" , monospace;">special-read</span> function. This is not always necessary, but I did have a case where I had a leading Byte-Order-Mark (BOM: 0xFEFF) that I needed to remove from the file. This, somewhat oddly, but understandably, is called <span style="font-family: "courier new" , "courier" , monospace;">#\Zero-Width-No-Break-Space</span> in LispWorks [<a href="https://en.wikipedia.org/wiki/Byte_order_mark#Usage">WikiPedia</a>]. If memory serves, putting the BOM in (which I did on purpose at one point) made reading the file work without specifying a format to read in. 
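<br /><br />The byte-level behaviour of the BOM is easy to check outside Lisp; a short Python sketch (for illustration only) shows why a plain UTF-8 read leaves a leading U+FEFF character that special-read then has to strip:

```python
# U+FEFF (the BOM) encodes to the three bytes EF BB BF in UTF-8.
bom = "\ufeff".encode("utf-8")
print(bom)  # b'\xef\xbb\xbf'

data = bom + "abc".encode("utf-8")
# a plain utf-8 decode keeps the BOM as a leading character...
print(data.decode("utf-8") == "\ufeffabc")      # True
# ...while Python's utf-8-sig codec strips it, much as special-read does
print(data.decode("utf-8-sig") == "abc")        # True
```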
But the BOM is optional for utf-8.<br /><br />Writing out to a file is quite straightforward, but the question is whether it will be possible to read it in correctly. The answer is that it depends on how you will read it in later. If you are only going to read it into the same program or into programs that will assume UTF-8, then there's no issue. But if you want to be able to read it in without thinking about whether the source file is Unicode or not, you can deliberately output the BOM so that it will read successfully in LispWorks even if you did not specify the <span style="font-family: "courier new" , "courier" , monospace;">:external-format</span>. Below is a sample of doing just this.<br /><br /></div><textarea cols="87" readonly="" rows="7" wrap="off">(defmethod write-to-file ((cc vocabulary-card-collection) filename) (with-open-file (stream filename :direction :output :if-exists :supersede :external-format '(:utf-8 :eol-style :crlf)) (write-char #\Zero-Width-No-Break-Space stream) (print (collection->plist cc) stream)))</textarea><br /><br />The function <span style="font-family: "courier new" , "courier" , monospace;">collection->plist</span> is my own function that turns my object into a plist which can be read by the built-in function <span style="font-family: "courier new" , "courier" , monospace;">read</span>. 
(This is my way of textifying CLOS objects to write them to file—the details are not relevant to my topic.)<br /><br />Now, I can read in the plist that I wrote out, without specifying the :external-format, as in:<br /><br /><textarea cols="87" readonly="" rows="2" wrap="off">(with-open-file (stream "d:/documents/blah2.txt") (read stream)) </textarea><br /><br />However, if I didn't manually output the BOM, I would need to write:<br /><br /><textarea cols="87" readonly="" rows="2" wrap="off">(with-open-file (stream "d:/documents/blah2.txt" :external-format :utf-8) (read stream)) </textarea><br /><br />The long and the short of it is that you can use :external-format all the time and use a consistent format for files that pertain to your program and that's probably the best practice. If you want to be sure another LispWorks program is going to be OK with your UTF-8 file, whether it is consistently using that format or not, putting a BOM at the start for good measure may be a good idea. On the other hand, maybe the reader will choke on the BOM because it doesn't filter it out when reading a file it already knows is utf-8.<br /><br />So, I didn't solve the problem, but if you're head butting a format issue in Unicode, maybe this says enough about it that you can find something to do with the particular data you can't seem to read right now—because, I feel your pain.<br /><br />For more information on IO for Unicode in LispWorks, see the manual, especially <a href="file:///C:/Program%20Files%20(x86)/LispWorks/lib/7-0-0-0/manual/online/LW/html/lw-196.htm">here</a>.Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-4296981739435789152016-06-01T00:57:00.001-05:002016-06-02T21:26:17.408-05:00Tax Brackets Are Not So BadThere's a common misconception that many seem to have about the way taxes work. People overthink themselves about "moving into a higher tax bracket." 
They think that if they just barely go over their current tax bracket into the next, the tax man will get extra hungry and they will wind up making less money than if they, for example, didn't go in for that little bit of overtime. These concerns are, in a word, unwarranted. That's not to say that anecdotal evidence in support of this idea does not abound. I'm sure you can find anecdotal evidence for anything that's popular to complain about.<br /><br />In 2015, the following tax brackets applied to (Canadian) federal income taxes:<br /><ul><li>up to $44,701: 15%</li><li>$44,701 to $89,401: 22%</li><li>$89,401 to $138,586: 26%</li><li>over $138,586: 29%.</li></ul><div>So, if I make $44,701 in taxable income, I pay $6705.15 in taxes on it. Suppose I make $44,702. Here's where the confusion becomes apparent. Some would try to tell you that you now pay 22% on the entire amount, or $9834.44, which would be a complaint-worthy problem for someone teetering on the border of a tax bracket. (If a government actually tried to do it this way, the public outrage would produce headline news, not anecdotal evidence.) I call this false idea "folk" tax brackets.</div><div><br /></div><div>What do I actually pay on $44,702? I pay \(15\% \times \$44,701 + 22\% \times \$1 = \$6705.37\) in income tax.</div><div><br /></div><div>This folk theory of taxation produces a jagged net income graph. This is the sort of thing people are implying when they speak of "making less" if they work overtime. 
You'd think they thought the take-home pay graph looked like this:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-zEY-qaWjtOA/V05ozO0czvI/AAAAAAAAEqA/9PdOrdrjMu4qevaiAHl89xpkTg4fHKEcwCLcB/s1600/naivetaxbracket.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://3.bp.blogspot.com/-zEY-qaWjtOA/V05ozO0czvI/AAAAAAAAEqA/9PdOrdrjMu4qevaiAHl89xpkTg4fHKEcwCLcB/s1600/naivetaxbracket.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 1. Marginal taxes don't cause the take-home pay graph to look anything like this, but some talk as if it did.</td></tr></tbody></table><div>The real graph is not jagged. It isn't smooth either (if you look close enough). It is continuous, unlike the above graph, but it does produce corners at the locations of the tax bracket cross-overs. These are not really that scary looking:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-PYmU7UCo9_4/V05qXq-XGpI/AAAAAAAAEqM/kA3meGnh3hsh9rqES_kM6aRiKzNCyISDwCLcB/s1600/realtaxes.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://1.bp.blogspot.com/-PYmU7UCo9_4/V05qXq-XGpI/AAAAAAAAEqM/kA3meGnh3hsh9rqES_kM6aRiKzNCyISDwCLcB/s1600/realtaxes.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 2. Marginal taxes only increase the rate of tax "marginally". The increase in tax rate applies to the marginal amounts ("increments"), not the entire amounts. 
Basically you have a line with some very slight elbows in it where there is a small change in direction.</td></tr></tbody></table><div>So, like you, I wouldn't mind if the whole graph was a little steeper (increased take-home pay, lower taxes), but it is what it is. And what it is is relatively fair, in concept at least.<br /><br /></div><h3>Modeling Taxes in Maxima</h3><div>Now, where did I get these wonderful graphs? I produced them in Maxima, of course. Here's the Maxima code to define a marginal tax formula, if given the brackets that apply:</div><div><br /></div><textarea cols="90" rows="15" wrap="off">MarginalTaxFormula(income_variable, percentList, thresholds) := block([rangeNumbers, ranges, i, u], rangeNumbers: append([0],thresholds,[inf]), ranges: makelist([percentList[i], rangeNumbers[i], rangeNumbers[i+1]],i,1,length(percentList)), lreduce("+", map(lambda([u], [p, mn, mx]: u, charfun(mn < income_variable and income_variable < mx) * (income_variable - mn) * p + charfun(income_variable >= mx) * (mx - mn) * p), ranges))); FederalIncomeTax2015: [[0.15, 0.22, 0.26, 0.29], [44701, 89401, 138586]]; ManitobaIncomeTax2015: [[0.1080, 0.1275, 0.1740], [31000, 67000]]; marginalFederalTax(income) := ''(MarginalTaxFormula(income, FederalIncomeTax2015[1], FederalIncomeTax2015[2])); marginalManitobaTax(income) := ''(MarginalTaxFormula(income, ManitobaIncomeTax2015[1], ManitobaIncomeTax2015[2])); marginalIncomeTax(income) := marginalFederalTax(income) + marginalManitobaTax(income); afterTax(income) := income - marginalIncomeTax(income); </textarea><br /><br />You can see the parts that you would need to change to produce your own formula for your marginal tax situation. FederalIncomeTax2015 is a list containing two lists. The first of these is the list of tax percentages. 
The next is the list of corresponding threshold values (0 is implicit as the beginning and the last percentage is assumed to apply to anything after the largest threshold amount).<br /><br />Noteworthy items in the Maxima code above:<br /><br /><div><ol><li>The characteristic function (charfun) produces the same effect as an if statement in the code. The characteristic function yields 1 if the condition is true, else 0. So, if an amount is not in the relevant range, it will not contribute. I have two different conditions to check for, but rather than branching, I just keep them in parallel. The advantage is simplicity. The disadvantage is that, in theory, there is more computation happening than really needs to happen. The simplicity carries a lot of weight in my book.</li><li>I append 0 and infinity to the list of boundaries for the tax brackets. Since Maxima recognizes inf and knows that any-number is less than inf, I have no case-taking to deal with the initial or final tax brackets.</li><li>map. When map will do the job, nothing does it more succinctly. Thanks map!</li><li>'' (quote-quote or double single-quote). Oh, boy. The basic meaning is, "evaluate what comes next and use the result of the evaluation instead of what's here." What this amounts to in this code is that the code that produces the function (the MarginalTaxFormula) is not re-run every time the function marginalFederalTax(income) is called with a different value of income. The MarginalTaxFormula(...) code is invoked by the '' operator and this returns the function which MarginalTaxFormula(...) produces. This is necessary because I am defining the function using the := operator which suspends evaluation until the function is invoked. 
Thus, you end up with a definition like this:</li></ol><div><div><span style="font-family: "courier new" , "courier" , monospace;">marginalFederalTax(income):=12788.1*charfun(income>=138586)+9834.0*charfun(income>=89401)+6705.15*charfun(income>=44701)+0.15*charfun(0 &lt; income and income &lt; 44701)*income+</span></div><div><span style="font-family: "courier new" , "courier" , monospace;">0.22*charfun(44701 &lt; income and income &lt; 89401)*(income-44701)+0.26*charfun(89401 &lt; income and income &lt; 138586)*(income-89401)+0.29*charfun(138586 &lt; income and income &lt; inf)*(income-138586)</span></div></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-21478697671736054612015-12-21T10:04:00.001-06:002016-01-02T19:30:34.239-06:00Maxima in Common Lisp via ASDFI have previously used Lisp code that ran in Maxima. This is fairly straightforward if you know Common Lisp. Basic instructions can be found in Chapter 37.1 of the Maxima help files (look for Contents link at the top of any of the help pages). But, how about the other way around—Maxima from Lisp side?<br /><br />I recently looked into running Maxima inside Common Lisp using ASDF and found <a href="http://sourceforge.net/p/maxima/mailman/maxima-discuss/thread/slrnn06132.4lc.robert.dodier%40freekbox.fglan/#msg34485091">this</a>. That gave me hope that it was possible and a little insight into what it was trying to do. 
I have sometimes wanted to create a user interface in some other environment (such as LispWorks) and call upon Maxima to do some computations for me.<br /><br />To get yourself set up to access Maxima from Common Lisp you need:<br /><ol><li>The <a href="http://sourceforge.net/projects/maxima/files/">source</a>, including the maxima.asd file.</li><li>Some idea where to put stuff and, perhaps, a few tweaks to the maxima.asd file.</li><li>Some file search variable set up to allow Maxima load statements in your common lisp code so you can access functionality found in the share packages or your own packages.</li></ol><div><h3>Tweaks</h3></div><div>I put the source under my common-lisp directory so that a simple call to <span style="font-family: Courier New, Courier, monospace;">(asdf:load-system :maxima)</span> would work.<br /><br />I found that I needed to edit the maxima.asd file. I don't want to tell ASDF where to put the result or interfere with its "normal" operation. "ASDF, you're in charge!" is basically how I look at it. Accordingly, I commented out the <span style="font-family: Courier New, Courier, monospace;">output-files :around</span> method. I suppose you could change the <span style="font-family: Courier New, Courier, monospace;">*binary-output-dir*</span> here as well, if you would like to—perhaps to suit whichever Lisp you are compiling to.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-vW0KAYyc1pk/VngVZPjBl-I/AAAAAAAAEg4/xrXMJ73_uD4/s1600/commented%2Bout.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-vW0KAYyc1pk/VngVZPjBl-I/AAAAAAAAEg4/xrXMJ73_uD4/s1600/commented%2Bout.PNG" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 1. 
I'm pretty sure I don't need this code for my purposes, so I have it commented out.</td></tr></tbody></table></div><div><h3>Accessing Shared Packages</h3></div>You might not realize how much stuff is in the share packages if you have been using the wxMaxima interface only. It appears that a number of widely used share packages are automatically loaded in wxMaxima, but maxima.asd only loads the core components. This is not a serious impediment—you can determine which packages to load from the Maxima help files.<br /><br />But before you can use the Maxima load command, you need to set the file_search_maxima and file_search_lisp Maxima variables. I combined a call to <span style="font-family: Courier New, Courier, monospace;">asdf:load-system</span> with the setting of these variables in a file under my common-lisp directory. I called this file load-max.lisp and it looks a bit like this (note how hairy it gets if you scroll right):<br /><br /><textarea cols="80" rows="16" wrap="off">(asdf:load-system :maxima) ;; edit the paths in these lists to suit your own file system layout (setf maxima::$file_search_maxima '((MLIST SIMP) "C:/Users/Darren/maxima/$$$.{mac,mc}" "c:/path/common-lisp/maxima-5.37.3/share/$$$.{mac,mc}" 
"c:/path/common-lisp/maxima-5.37.3/share/{affine,algebra,algebra/charsets,algebra/solver,amatrix,bernstein,calculus,cobyla,cobyla/ex,cobyla/lisp,colnew,colnew/lisp,combinatorics,contrib,contrib/Eulix,contrib/Grobner,contrib/Zeilberger,contrib/alt-display,contrib/altsimp,contrib/binsplit,contrib/bitwise,contrib/boolsimp,contrib/coma,contrib/diffequations,contrib/diffequations/tests,contrib/elliptic_curves,contrib/elliptic_curves/figures,contrib/format,contrib/fresnel,contrib/gentran,contrib/gentran/man,contrib/gentran/test,contrib/gf,contrib/integration,contrib/levin,contrib/lurkmathml,contrib/maxima-odesolve,contrib/maximaMathML,contrib/mcclim,contrib/namespaces,contrib/noninteractive,contrib/odes,contrib/prim,contrib/rand,contrib/rkf45,contrib/sarag,contrib/smath,contrib/state,contrib/trigtools,contrib/unit,contrib/vector3d,descriptive,diff_form,diff_form/tests,diffequations,distrib,draw,dynamics,ezunits,finance,fourier_elim,fractals,graphs,hypergeometric,integequations,integer_sequence,integration,lapack,lapack/blas,lapack/lapack,lbfgs,linearalgebra,logic,lsquares,macro,matrix,minpack,minpack/lisp,misc,mnewton,multiadditive,numeric,numericalio,orthopoly,pdiff,physics,simplex,simplex/Tests,simplification,solve_rat_ineq,solve_rec,sound,stats,stringproc,sym,tensor,to_poly_solve,trigonometry,utils,vector,z_transform}/$$$.{mac,mc}" "c:/path-to-my-own/Useful Maxima Functions/$$$.{mac,mc}") maxima::$file_search_lisp `((MLIST SIMP) "C:/Users/Darren/maxima/$$$.{fasl,lisp,lsp}" "c:/path/common-lisp/maxima-5.37.3/share/$$$.{fasl,lisp,lsp}" 
"c:/path/common-lisp/maxima-5.37.3/share/{affine,algebra,algebra/charsets,algebra/solver,amatrix,bernstein,calculus,cobyla,cobyla/ex,cobyla/lisp,colnew,colnew/lisp,combinatorics,contrib,contrib/Eulix,contrib/Grobner,contrib/Zeilberger,contrib/alt-display,contrib/altsimp,contrib/binsplit,contrib/bitwise,contrib/boolsimp,contrib/coma,contrib/diffequations,contrib/diffequations/tests,contrib/elliptic_curves,contrib/elliptic_curves/figures,contrib/format,contrib/fresnel,contrib/gentran,contrib/gentran/man,contrib/gentran/test,contrib/gf,contrib/integration,contrib/levin,contrib/lurkmathml,contrib/maxima-odesolve,contrib/maximaMathML,contrib/mcclim,contrib/namespaces,contrib/noninteractive,contrib/odes,contrib/prim,contrib/rand,contrib/rkf45,contrib/sarag,contrib/smath,contrib/state,contrib/trigtools,contrib/unit,contrib/vector3d,descriptive,diff_form,diff_form/tests,diffequations,distrib,draw,dynamics,ezunits,finance,fourier_elim,fractals,graphs,hypergeometric,integequations,integer_sequence,integration,lapack,lapack/blas,lapack/lapack,lbfgs,linearalgebra,logic,lsquares,macro,matrix,minpack,minpack/lisp,misc,mnewton,multiadditive,numeric,numericalio,orthopoly,pdiff,physics,simplex,simplex/Tests,simplification,solve_rat_ineq,solve_rec,sound,stats,stringproc,sym,tensor,to_poly_solve,trigonometry,utils,vector,z_transform}/$$$.{fasl,lisp,lsp}" "c:/path/common-lisp/maxima-5.37.3/src/$$$.{fasl,lisp,lsp}" "c:/path-to-my-own/Useful Maxima Functions/$$$.{o,lisp,lsp}"))</textarea><br /><br />So, if I want to use Maxima in my Lisp session, I just need to call <span style="font-family: Courier New, Courier, monospace;">(load "~/common-lisp/load-max.lisp")</span>. 
I can move to the Maxima package by calling <span style="font-family: Courier New, Courier, monospace;">(in-package :maxima)</span>.<br /><br />Try out the diff function:<br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">MAXIMA> ($diff #$x^2$ #$x$)</span><br /><span style="font-family: Courier New, Courier, monospace;">((MTIMES SIMP) 2 $X)</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: inherit;">Note the syntax for referring to the diff function (</span><span style="font-family: Courier New, Courier, monospace;">$diff</span><span style="font-family: inherit;">) and the syntax for an entire Maxima expression (</span><span style="font-family: Courier New, Courier, monospace;">#$expression$</span><span style="font-family: inherit;">). There's also some lower- to uppercase switching to watch out for, but see chapter 37 of the Maxima help documentation for more about this.</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;">Suppose I want to use something in a share package or one of my own custom Maxima packages. 
I can use the Maxima version of </span><span style="font-family: Courier New, Courier, monospace;">load</span><span style="font-family: inherit;">, namely, </span><span style="font-family: Courier New, Courier, monospace;">$load</span><span style="font-family: inherit;">.</span><br /><span style="font-family: inherit;"><br /></span><span style="font-family: Courier New, Courier, monospace;">MAXIMA> ($load "stringproc")</span><br /><span style="font-family: Courier New, Courier, monospace;"><deleted></deleted></span><br /><span style="font-family: Courier New, Courier, monospace;">MAXIMA> ($parse_string "x^2")</span><br /><span style="font-family: Courier New, Courier, monospace;">((MEXPT) $X 2)</span><br /><br />One more example, to show how to deal with Maxima-specific features that are beyond the scope of Common Lisp. In general, use the <span style="font-family: Courier New, Courier, monospace;">meval </span>function as in:<br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">(setf n (* 3 4 5 6 7))</span><br /><span style="font-family: Courier New, Courier, monospace;">(maxima::meval `((maxima::$divisors) ,n))</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">((MAXIMA::$SET MAXIMA::SIMP) 1 2 3 4 5 6 7 8 9 10 12 14 15 18 20 21 24 28 30 35 36 40 42 45 56 60 63 70 72 84 90 105 120 126 140 168 180 210 252 280 315 360 420 504 630 840 1260 2520)</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: inherit;">(If you're interested, see Maxima-discuss archives. 
<a href="http://sourceforge.net/p/maxima/mailman/maxima-discuss/thread/slrnn8bmfr.2ke.robert.dodier%40freekbox.fglan/">HT </a></span><span style="background-color: white; color: #555555; font-family: monospace, sans-serif; font-size: 13px; line-height: 18px; white-space: pre-wrap;"><a href="http://sourceforge.net/p/maxima/mailman/maxima-discuss/thread/slrnn8bmfr.2ke.robert.dodier%40freekbox.fglan/">Robert Dodier</a>.</span><span style="font-family: inherit;">)</span><br /><ol></ol>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-78238237160847828302015-09-19T16:57:00.000-05:002015-09-20T09:19:53.501-05:00Circle Fitting in MaximaIn previous posts, I've considered the problem of <a href="http://darrenirvine.blogspot.com/2012/02/best-fit-circle-find-center-using-excel.html">fitting a circle to 2D data points</a> and made <a href="http://darrenirvine.blogspot.com/2014/04/best-fit-circle-3d-considerations.html">some remarks about 3D cases</a>. Fitting problems such as this require:<br /><ol><li>An equation form to fit to. </li><li>A function to quantify the error, which is to be minimized.</li></ol><div>For a circle, the form of equation we want to fit to is<br />\[r^2=(x-a)^2 + (y-b)^2.\]<br />Maxima's lsquares package removes the burden of item number 2 for us. 
But for interest's sake, the error function we want to minimize can be given as</div><div>\[\mathrm{r}\left( a,b\right) =\frac{\sum_{i=1}^{n}\sqrt{{\left( {x}_{i}-a\right) }^{2}+{\left( {y}_{i}-b\right) }^{2}}}{n}\]</div><div>\[\mathrm{SSE}\left( a,b\right) =\sum_{i=1}^{n}{\left( \mathrm{r}\left( a,b\right) -\sqrt{{\left( {x}_{i}-a\right) }^{2}+{\left( {y}_{i}-b\right) }^{2}}\right) }^{2}.\]</div><div>Recalling that \(\sum_{i=1}^{n}(\bar{x}-x_i)^2 = \sum_{i=1}^{n}{x_i}^2 - {\left(\sum_{i=1}^{n}{x_i}\right)}^{2}/n\), we see that<br />\[\mathrm{SSE}\left( a,b\right) =\sum_{i=1}^{n}\left({\left( {x}_{i}-a\right) }^{2}+{\left( {y}_{i}-b\right) }^{2}\right)-\frac{{\left(\sum_{i=1}^{n}\sqrt{{\left({x}_{i}-a\right) }^{2}+{\left( {y}_{i}-b\right) }^{2}}\right)}^{2}}{n}.\]</div><div><h3><b>lsquares_estimates()</b></h3>Given a set of points in matrix form (m), we need only to use the equation of a circle and indicate which are the data point variables and which are the "solve for" variables. In our problem,<br /><br /><span style="font-family: Courier New, Courier, monospace;">lsquares_estimates(m, [x,y], (x-a)^2 + (y-b)^2 = r^2, [a,b,r]);</span><br /><br />will suffice. The following function can be used to check the SSE.<br /><br /><span style="font-family: Courier New, Courier, monospace;">sse(a, b, r, pts) := sum((r-sqrt((pts[i][1]-a)^2+(pts[i][2]-b)^2))^2,i,1,length(pts));</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><br /><h3>Testing for Reasonableness</h3>It is reasonable to ask whether least squares (or other data fitting methods) will reliably produce good results with a particular quality of data for a particular application. This is true for any problem. What if the math works fine, but the likelihood of being able to "math filter" the likely errors in the data to reach an accurate answer is not so high? 
In other words, how much inaccuracy in the collected data can you tolerate and still have a reasonable expectation of getting to the right answer? I'm not going to attempt to deal with probabilities (per se) or confidence intervals, but will instead look at a simulation approach that may give you an idea of the reasonableness of the circle fit based on an (intuitively) estimated amount of random error in your data.<br /><br /><h3>Simulation of Circular Arc Data</h3>We will consider the problem of data collected along a circular arc using a total station, with an "as-built" scenario particularly in view. As such, auto-generated sample points should simulate the expected behaviors of a rod/prism person stepping out regular sample locations from start to end of an ostensibly circular arc. This can be implemented as random normal perturbations of a circular arc.<br /><br />One of the first pieces to a realistic perturbed circle is to be able to apply randomization that follows a normal distribution given an expected value and a standard deviation. Maxima has a function that behaves this way. <span style="font-family: Courier New, Courier, monospace;">random_normal()</span> can be called with an expected value and standard deviation and optionally with a number of values to generate. 
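As a quick aside, the perturb-then-check experiment is easy to prototype outside Maxima as well. Here is a sketch using only Python's standard library; random.gauss plays the role of random_normal(), and the helper names below are invented for illustration:

```python
import math
import random

random.seed(42)  # make the run reproducible

def perturbed_circle(cx, cy, r, sigma, n):
    """n points on a circle of radius r with normally distributed radial error."""
    pts = []
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        rk = random.gauss(r, sigma)  # analogue of Maxima's random_normal(r, sigma)
        pts.append((cx + rk * math.cos(theta), cy + rk * math.sin(theta)))
    return pts

def sse(a, b, pts):
    """Mean-radius SSE from the circle-fitting section: sum of (r_bar - d_i)^2."""
    d = [math.hypot(x - a, y - b) for (x, y) in pts]
    r_bar = sum(d) / len(d)
    return sum((r_bar - di) ** 2 for di in d)

pts = perturbed_circle(1000.0, 1000.0, 3000.0, 12.0, 200)
print(sse(1000.0, 1000.0, pts))  # roughly n*sigma^2 = 200 * 144 for these inputs
```

Evaluated at the true center, the SSE reflects only the injected noise, so comparing a fitted result against \(n\sigma^2\) gives a rough plausibility check.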
Here is a sample use and output of the function, pictured using the histogram function:<br /><br /><span style="font-family: Courier New, Courier, monospace;">load(distrib)$</span><br /><span style="font-family: Courier New, Courier, monospace;">load(draw)$</span><br /><div><div><span style="font-family: Courier New, Courier, monospace;">histogram(</span></div><div><span style="font-family: Courier New, Courier, monospace;"> random_normal(50.0, 1.0, 1000),</span></div><div><span style="font-family: Courier New, Courier, monospace;"> nclasses = 10,</span></div><div><span style="font-family: Courier New, Courier, monospace;"> xrange = [40,60]);</span></div></div><div><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-aDLOFg2eu9U/VLHp77Wmp1I/AAAAAAAAEUY/7fCkyx5HJX0/s1600/histogram.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-aDLOFg2eu9U/VLHp77Wmp1I/AAAAAAAAEUY/7fCkyx5HJX0/s1600/histogram.PNG" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 1. Normally distributed data</td></tr></tbody></table><div>The standard deviation should be chosen such that roughly two-thirds of the time (≈68%), the sample value will be within one standard deviation of the mean; that is, on the interval \((\bar{x}-\sigma, \bar{x}+\sigma).\) </div><div><br /></div><div>Where the radius is concerned, we have the relatively predictable variations of the prism person's actions and the relatively unpredictable variations of the ostensible circular arc, which could turn out to not be an arc at all. In the absence of any sort of meaningful solution to this latter variation, we could possibly just bump up the standard deviation. 
But I'm not clear on whether this is a meaningful thing to do in assessing the acceptability of the data collection precision. The error should be "small" if the thing we measured conforms well to a circle. In order to check for whether this is "small" you need to produce a standard deviation from the SSE: \[\sigma_{SSE} = \sqrt{\frac{SSE}{n-1}}\] In the scenario we're dealing with here, someone would probably draw the computed arc in a CAD program and compare it to the measured data points and in an "arm-waving" sort of way decide whether it looks good or not.</div><div><br /></div>Where the point sampling distances are concerned, we will assume that the rod person can step out distances (or angle sweeps) within some percentage of that intended. We will also make the pragmatic assumption that the prism person has decided a definite number of samples per given arc and will try to make them evenly spaced. The prism person will try to subdivide the remaining space at each sample point, and two thirds of the time, they will be within some percentage of the distance (or angle sweep) they intended.<br /><br />Here is the perturbedArc() function with a sample usage:<br /><br /><textarea cols="80" rows="18" wrap="off">perturbedArc(angle, center, radius, probable_radius_error, sample_spacing_variability, number_points) := block( [initAngle, termAngle, radii, expectedSweep, actualSweep, angles, remainingAngle, i], initAngle: random(float(2*%pi)), termAngle: float(initAngle + angle), remainingAngle: float(angle), angles: [initAngle, termAngle], radii: random_normal(radius, probable_radius_error, number_points), for i: (number_points-1) thru 2 step -1 do ( expectedSweep: remainingAngle/i, actualSweep: random_normal(expectedSweep, sample_spacing_variability*expectedSweep), remainingAngle: remainingAngle - actualSweep, angles: cons(termAngle - remainingAngle, angles) ), angles: sort(angles), map(lambda([r,theta],bfloat(center+r*[cos(theta),sin(theta)])), radii, angles) )$ pts: 
perturbedArc(1.5*%pi, [1000,1000], 3000, 12, 0.05, 20)$ wxplot2d([discrete,pts], [style,points], [same_xy,true]);</textarea><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-F_0tYVhnHx0/Vf3NavCOnDI/AAAAAAAAEdo/Uz6hvXzNPfY/s1600/circular%2Barc%2Bdata.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-F_0tYVhnHx0/Vf3NavCOnDI/AAAAAAAAEdo/Uz6hvXzNPfY/s1600/circular%2Barc%2Bdata.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 2 - This is a perturbed circular arc. Doesn't look very perturbed does it? If we bumped up the standard deviation of the radius enough, we'd get something more clearly perturbed.</td></tr></tbody></table>To produce the matrix form of the data I use a simple function I created (see <a href="http://darrenirvine.blogspot.com/2015/01/get-your-text-data-into-maxima.html">here</a>) called <span style="font-family: Courier New, Courier, monospace;">MakeMatrix()</span>. Beyond that, it is a simple as<br /><br /><span style="font-family: Courier New, Courier, monospace;">m: MakeMatrix(pts)$</span><br /><span style="font-family: Courier New, Courier, monospace;">lsquares_estimates(m, [x,y], (x-a)^2 + (y-b)^2 = r^2, [a,b,r]);</span><br /><span style="font-family: Courier New, Courier, monospace;">subst(%[1],sse(a,b,abs(r),pts));</span></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-28347965517972639662015-09-07T14:46:00.003-05:002015-09-07T15:05:55.861-05:00Plotting a LWPOLYLINE in MaximaIn this post, I want to show how to produce a representation of LWPOLYLINEs in Maxima that is suitable for plotting to the <span style="font-family: Courier New, Courier, monospace;">plot2d</span> function. 
The direct usefulness of such an operation is dubious. This is more in the way of a worked example to help build understanding of a few mathematical and computing tools.<br /><br /><span style="font-size: large;"><b>Recap</b></span><br />The LWPOLYLINE is a 2D representation of polylines that represents each vertex as having x-value, y-value, and bulge value. The bulge value indicates the path of travel from the current point to the next one. Either it will go straight (bulge = 0), veer to the left and back as a CW arc (bulge < 0), or it will veer to the right and back as a CCW arc (bulge > 0).<br /><br />So, it will<br /><ul><li>bulge left (< 0),</li><li>go straight (= 0), or</li><li>bulge right (> 0).</li></ul><div>More details on the mathematics of the bulge factor can be found <a href="http://darrenirvine.blogspot.com/2015/08/polylines-radius-bulge-turnaround.html">here</a> and <a href="http://darrenirvine.blogspot.com/2014/09/bulge-value-code-42-in-lwpolyline.html">here</a>.</div><div><br /></div><span style="font-size: large;"><b>The Pieces</b></span><br /><div>First of all, we define our problem and break its solution up into pieces. A polyline is of such a nature that it does not make a good "\(f(x)\)" type function (plotting \(y\) versus \(x\)). However, it lends itself to parametric expression well enough. So, we will have an \(x(s)\) function and a \(y(s)\) function. Ideally, I would like to specify a length along the polyline (measured from the start) and get a point \((x(s), y(s))\). (If you have a surveying background, think \(s = \) station number.)</div><div><br /></div><div>So, I need to have parametric equations for lines and for arcs that are parameterized for arc length and initial value of the parameter. I then need to stitch these together to make both an \(x\) and a \(y\) piecewise function.</div><div><br /></div><div><b>Linear</b></div><div><br /></div><div>Parametric equations for lines involve a direction and a starting point. 
Simply normalize the direction vector to parameterize for length. This looks like</div><div>\[(x(s), y(s)) = A + \frac{(B-A)}{||B-A||}s,\]</div><div>where \(A\) is the starting point and \(B\) is the final point. We will need to adjust the initial value to account for all of the length of the polyline prior to the initial point on this segment, say \(s_i\), which changes our equations to</div><div>\[(x(s), y(s)) = A + \frac{(B-A)}{||B-A||}(s-s_i).\]</div><div><br /></div><div><b>Arc</b></div><div><br /></div><div>Parametric equations for arcs use a center, radius, initial angle, direction of rotation, and total angle subtended (some variation of feasible input parameters admitted). Parameterizing for arc length isn't as obvious as it was for lines, so let's start with a basic circle:</div><div>\[x(s) = c_x + r \cos{s}\]</div><div>\[y(s) = c_y + r \sin{s},\]</div><div>where \((c_x, c_y)\) is the center and \(r\) is the radius of the circle. For our reparameterization to work we require that radian measure be used (or else require degrees and do the conversion in the function, but why make work for ourselves and introduce inefficiency?). Recall that the definition of radians is the ratio of arc length to radius (\(\theta = s/r\)). So, our parameter becomes \(s/r\) and so we have</div><br /><div>\[x(s) = c_x + r \cos{\left(\frac{s}{r}\right)}\]</div><div>\[y(s) = c_y + r \sin{\left(\frac{s}{r}\right)}.\]</div><br /><div>To get the right starting point for our function we need the parameter inside the trig functions to be equal to the correct initial angle when the input value \(s = s_i\) (length of polyline up to the start of the arc) is given. We need, then, the initial angle, say \(\theta_i\), plus the angle past initial that the given \(s\) corresponds to. 
Thus,</div><br /><div>\[x(s) = c_x + r \cos{\left(\theta_i + \sigma \frac{s-s_i}{r}\right)}\]</div><div><div>\[y(s) = c_y + r \sin{\left(\theta_i + \sigma \frac{s-s_i}{r}\right)},\]</div></div>where \(\sigma\) is 1 or -1 as per the sign of the bulge factor.<br /><div><br /></div><div><b>Piecewise</b></div><div><br /></div><div>Making a piecewise function parameterized for length requires that we keep track of the length of the polyline up to the segment we are about to address. Implementing it in Maxima also requires a bit of acquaintance with the methods available to us for doing so. It is possible to use if-then-else logic to achieve this and obtain something that works fine for plotting purposes. As an illustration, consider a really simple example:</div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">f(x) := if x < 1 then x^2 else x^3;</span></div><div><span style="font-family: Courier New, Courier, monospace;">plot2d(f(x),[x,-1,2]);</span></div><div><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-8HbdXBTzdZ0/VeuvqQAB-7I/AAAAAAAAEcw/TRrJIw3AQ_U/s1600/plot.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-8HbdXBTzdZ0/VeuvqQAB-7I/AAAAAAAAEcw/TRrJIw3AQ_U/s1600/plot.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig 1. If-then-else logic in functions plots fine.</td></tr></tbody></table><div>There's another, somewhat handier approach to this problem, namely, the characteristic function. The characteristic function is a way of expressing conditions as multiplication by 1 or 0 in order to include and exclude the appropriate expression. 
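The multiply-by-an-indicator trick is not specific to Maxima; as a quick cross-language illustration, the same toy function can be sketched in Python (plotting omitted):

```python
def charfun2(x, a, b):
    """Indicator of the half-open interval [a, b): 1.0 inside, 0.0 outside."""
    return 1.0 if a <= x < b else 0.0

# The same piecewise function as above: x^2 below 1, x^3 from 1 onward.
def f(x):
    return charfun2(x, float("-inf"), 1) * x**2 + charfun2(x, 1, float("inf")) * x**3

print(f(0.5))  # 0.25 (x^2 branch)
print(f(2.0))  # 8.0 (x^3 branch)
```

Because each branch contributes zero outside its own interval, the pieces can be built separately and simply summed.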
<span style="font-family: Courier New, Courier, monospace;">charfun2(x,a,b)</span> returns 1 if \(x\in [a,b)\), else 0. So, we can modify the above function to be as follows and obtain the same result. </div><br /><span style="font-family: Courier New, Courier, monospace;">load(interpol);</span><br /><div><span style="font-family: Courier New, Courier, monospace;">f(x) := charfun2(x,-inf,1)*x^2 + charfun2(x,1,inf)*x^3;</span><br /><span style="font-family: Courier New, Courier, monospace;">plot2d(f(x),[x,-1,2]);</span><br /><br />One nice thing about these expressions is that it is possible to create them separately and add them together later and get a relatively normal Maxima expression. (There are some hiccups if you try to differentiate or integrate them.) I can't comment on the performance of the characteristic function approach versus the if-then-else approach, but the characteristic function approach may be more manageable to produce.<br /><br /><span style="font-size: large;"><b>The Program</b></span><br />There are a number of details that don't specifically relate to this problem that I will not go into detail about and other details I have given in the two previously mentioned posts on bulge factors. Here is the complete code for producing the parametric equations and plotting them followed by the result for a specific example polyline.<br /><br /><textarea cols="80" rows="4" wrap="off">:lisp (defun |$MINLENGTHP| (maxima-list min-length) (loop for p in maxima-list for i = 1 then (1+ i) if (> i min-length) return t))</textarea><br /><br /><textarea cols="90" rows="65" wrap="off">load(interpol)$ maperror: false$ mapprint: false$ ratprint: false$ justpoint(u) := [u[1],u[2]]$ turn_right(direction) := [direction[2], -direction[1]]$ rotation_sign(dirA, dirB) := -signum(justpoint(dirA) . turn_right(dirB))$ deflection_sign(a,b,c) := rotation_sign(b-a,c-b)$ magnitude(v) := sqrt(v . 
v)$ polyseglength(a,b) := block([run,radius,bulge,centralAngle], run: magnitude(justpoint(b-a)), bulge: a[3], if bulge = 0 then run else ( radius: run/4*(1/bulge + bulge), centralAngle: atan(bulge)*4, centralAngle*radius))$ /* negative bulge does not cause an issue for this calc; comes out in the wash */ /* returns [center,radius,rise,run,altitude,bulge_normal] */ arc_everything(a,b) := block([dir,u,norm,bulge,s,rise,radius,alt], dir: justpoint(b-a), u: magnitude(dir), norm: turn_right(dir)/u, bulge: a[3], s: signum(bulge), rise: abs(bulge)*u/2, radius: u/4*(1/abs(bulge) + abs(bulge)), alt: radius - rise, [(a+b)/2 - s*alt*norm,radius,rise,u,alt,norm])$ parametric_line(A,B,start,parameter) := justpoint(A) + justpoint(B-A)/magnitude(justpoint(B-A)) * (parameter - start)$ parametric_arc(A,B,start,parameter) := block([center,radius,rise,run,altitude,bulge_normal,sig,theta_i,theta], [center,radius,rise,run,altitude,bulge_normal]: arc_everything(A,B), sig: signum(A[3]), [x,y]: (justpoint(A)-center)/magnitude(justpoint(A)-center), theta_i: atan2(y,x), theta: theta_i + sig * (parameter - start)/radius, center + radius*[cos(theta),sin(theta)])$ parametric_polyline(polyline,parameter) := block([cumlen,cumlen2,current,poly], cumlen: 0.0, current: polyline, poly: 0, while minlengthp(current,2) do ( term: if current[1][3] = 0 then parametric_line(current[1],current[2],cumlen,parameter) else parametric_arc(current[1],current[2],cumlen,parameter), cumlen2: cumlen + polyseglength(current[1],current[2]), poly: poly + charfun2(parameter,cumlen,cumlen2)*term, cumlen: cumlen2, current: rest(current)), [parametric,poly[1],poly[2],[parameter,0.0,cumlen-10^-8]])$ pl: [[0,0,0],[0.2805351111430803,0.2805351111430803,-0.1989123673796571], [0.4568168316928867,0.3535533905932737,0],[2.854253390593276,0.3535533905932737,-0.4142135623731028], [3.103553390593274,0.1042533905932738,0],[3.103553390593274,-3.0,-1.0],[0.0,-3.0,0.5], [1.5,-2.0,0]]$ p: parametric_polyline(pl,t)$ 
plot2d([p],[same_xy,true])$</textarea></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-tpccfjcSv3c/Ve3jpkx-o9I/AAAAAAAAEdA/sDK_VUKkOY4/s1600/capture10.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-tpccfjcSv3c/Ve3jpkx-o9I/AAAAAAAAEdA/sDK_VUKkOY4/s1600/capture10.PNG" /></a></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-19875122740407913332015-08-22T18:47:00.002-05:002015-09-19T00:24:35.838-05:00Polylines: Radius-Bulge TurnaroundThe DXF representation of LWPOLYLINEs represents arcs using the end points of the arc and a bulge factor which is the tangent of a quarter of the central angle of the arc (see <a href="http://darrenirvine.blogspot.com/2014/09/bulge-value-code-42-in-lwpolyline.html">this previous post</a>). We want to interchange between the radius and the bulge factor (both ways). Let's pull up the earlier diagram.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-a7h3fx2sQ0w/VBTyjsR-cLI/AAAAAAAAEIE/xc70GmyaU2c/s1600/Capture7.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="478" src="http://4.bp.blogspot.com/-a7h3fx2sQ0w/VBTyjsR-cLI/AAAAAAAAEIE/xc70GmyaU2c/s640/Capture7.PNG" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>Looking at the diagram, we can produce several useful formulas. Taking the bulge factor as \(b\), observe that<br />\[b = \tan {\theta \over 4} = {i \over u/2} = {2 i \over u}\tag{1},\]<br />where we are temporarily ignoring the issue of sign.<br /><div>The distance \(u\) is simply the distance from \(A\) to \(B\) and these points are always given to us in the context of a polyline whether we are given the bulge factor or the radius. 
We write this as </div><div>\[u = ||B-A||.\tag{2}\]</div><div>Using the Pythagorean theorem we have<br />\[r^2 = {u^2 \over 4} + a^2 .\tag{3}\]<br />Using the substitution \(a=r-i\), we obtain<br />\[r^2= {u^2 \over 4} + (r-i)^2 = {u^2 \over 4} + r^2 - 2 r i + i^2\]<br />and so<br />\[2 r i = {u^2 \over 4} + i^2.\tag{4}\]<br /><br /></div><b>Radius/bulge direction <span style="font-family: Courier New, Courier, monospace;">=></span> Bulge factor</b><br /><br />Using equation (3), we can solve for \(a\)<br />\[a = \sqrt{r^2 - \frac{u^2}{4}}.\]<br />From this we can obtain \(i = r - a\) which allows us to calculate \(b\) according to equation 1. Additionally, set the sign of \(b\) to be positive for a CCW arc and negative for a CW arc. Alternatively stated, \(b\) is positive if the peak of the arc is a right offset from ray \(\overrightarrow{AB}\) and negative if the peak is a left offset from ray \(\overrightarrow{AB}\). In our diagram above, if point A comes before point B in the order they occur in the polyline, then we have a CW arc (P offset to the left of \(\overrightarrow{AB}\)) and so \(b\) should be negative. A negative bulge value indicates bulging to the left and a positive value indicates bulging to the right.<br /><br /><b>Bulge factor </b><b><span style="font-family: Courier New, Courier, monospace;">=></span></b><b> Radius/bulge direction</b><br /><br />We calculate \(u\) from \(A\) and \(B\) and then substitute equation 1 (solved for \(i\)) into equation 4 to get<br />\[2 r \frac{b u}{2} = \frac{u^2}{4} + \frac{b^2 u^2}{4}\]<br />\[4 r b = u + b^2 u\]<br />\[r = u \left(\frac{1 + b^2}{4 b}\right).\]<br />Note that we would actually use the absolute value of \(b\) to obtain a positive radius. We can use the same logic as above for determining CW versus CCW (it's "iff" logic).<br /><br /><b>Locating the Center Point</b><br /><br />Whether we are given the radius or bulge factor, we may want to find the center point. 
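Before turning to the center point, the two conversions just derived are easy to sanity check numerically. Here is a small Python sketch (the function names are invented for illustration), using a semicircle, where \(\theta = \pi\) gives \(b = \tan(\pi/4) = 1\) and the radius should be half the chord:

```python
import math

def bulge_from_radius(u, r):
    """|b| from chord length u and radius r, via a = sqrt(r^2 - u^2/4) and i = r - a."""
    a = math.sqrt(r**2 - u**2 / 4.0)  # equation (3) solved for a
    i = r - a                         # the "rise"
    return 2.0 * i / u                # equation (1)

def radius_from_bulge(u, b):
    """r from chord length u and bulge b: r = u*(1 + b^2)/(4*|b|)."""
    b = abs(b)
    return u * (1.0 + b**2) / (4.0 * b)

# Semicircle check: chord 10, radius 5.
assert abs(bulge_from_radius(10.0, 5.0) - 1.0) < 1e-12
assert abs(radius_from_bulge(10.0, 1.0) - 5.0) < 1e-12
```

The two functions are inverses of one another (up to the sign convention), which makes a convenient round-trip test.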
From above, we know we have \(u\), \(i\), \(b\), \(r\), and \(a\) all easily available to us if we need them.<br /><br />We take our unit vector for the direction of the line segment taken from \(A\) to \(B\) as \(d = (B-A)/u\). Then the normal direction (90° right turn for offsetting) is given by<br />\[N = (d_y, -d_x).\]<br />Taking \(\sigma = {\rm signum}(b)\),<br />\[C = \frac{A+B}{2} - \sigma a N\]<br />\[P = \frac{A+B}{2} + \sigma i N.\]<br /><br />In my <a href="http://darrenirvine.blogspot.com/2015/09/plotting-lwpolyline-in-maxima.html">next post</a>, I show a way to implement this in Maxima.Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-46664340737038386612015-05-30T14:02:00.000-05:002015-05-30T19:10:01.137-05:00>> Imitation Forward Pipe for Common LispSince learning a little F# about a year ago, I took a liking to its forward pipe operator (|>). The semantics of the forward pipe work great with F#'s sequence data type (with so-called, "lazy" computation) and functional style programming. It allows you to express a progression of consecutive steps in the order they will be done, while allowing you to directly feed the results of the last function into the input of the next function, avoiding redundant variable declarations. I suppose you could call these types of consecutive actions "transitive" combinations of operations. Imperative style programming would preserve the apparent sequence of events (meaning, the order you type the operations in, is the order the operations happen in), but it also involves declaring intermediate variables, which perhaps you will only use once.<br /><div><br />I'm not an expert on functional style programming and don't intend, right now, to explain or justify its merits, except to say that sometimes it produces cleaner, clearer code. 
But while I currently prefer Common Lisp above most other (programming) languages, I really like that forward pipe operator in F#. This morning, I thought it was time to cross that bridge—try to make an imitation forward pipe in Common Lisp.</div><div><br /></div><div>In F#, if you wanted to turn a into b into c, you might do something like<br /><br /><span style="font-family: Courier New, Courier, monospace;">let a2c x =</span><br /><span style="font-family: Courier New, Courier, monospace;"> a2b x</span><br /><span style="font-family: Courier New, Courier, monospace;"> |> b2c</span><br /><br />where b2c also takes one parameter. In Lisp, you end up with something more like</div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">(defun a->c (x)</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (b->c (a->b x)))</span></div><div><br /></div><div>This is fine for being concise but it doesn't preserve the (apparent) logical ordering of \(a\to b\to c\). We could preserve it by changing to something more imperative, such as</div><div><br /></div><div><div><span style="font-family: Courier New, Courier, monospace;">(defun a->c (x)</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (let ((b (a->b x)))</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (b->c b)))</span></div></div><div><br /></div><div>and this works okay. 
However, I would like to write<br /><br /><span style="font-family: Courier New, Courier, monospace;">(defun a->c (x)</span><br /><span style="font-family: Courier New, Courier, monospace;"> (>> </span><br /><span style="font-family: Courier New, Courier, monospace;"> (a->b x)</span><br /><span style="font-family: Courier New, Courier, monospace;"> (b->c)))</span><br /><br />Or,<br /><br /><span style="font-family: Courier New, Courier, monospace;">(defun a->e (x)</span><br /><span style="font-family: Courier New, Courier, monospace;"> (>> :pipe-in</span><br /><span style="font-family: Courier New, Courier, monospace;"> (a->b x)</span><br /><span style="font-family: Courier New, Courier, monospace;"> (b->c)</span><br /><span style="font-family: Courier New, Courier, monospace;"> (c->d :pipe-in 0.0625d0)</span><br /><span style="font-family: Courier New, Courier, monospace;"> (d->e)))</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span>where we want this last one to expand into<br /><br /><span style="font-family: Courier New, Courier, monospace;">(D->E (C->D (B->C (A->B X)) 0.0625))</span><br /><br />I've introduced a keyword so that we can dictate which parameter of the next function the previous result goes into. You can specify any keyword to use as a pipe-in parameter, but make sure you don't use a keyword of one of the functions being called in the sequence—it'll really mess with your mind. 
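For comparison, in a language without macros the same left-to-right flow can be had at run time with an ordinary function. Here is a Python sketch (a2b and friends are stand-ins, as above):

```python
from functools import reduce

def pipeline(x, *fns):
    """Thread x through fns left to right: pipeline(x, f, g) == g(f(x))."""
    return reduce(lambda acc, fn: fn(acc), fns, x)

# Stand-ins for a->b, b->c, c->d.
a2b = lambda n: n + 1
b2c = lambda n: n * 2
c2d = lambda n: n - 3

print(pipeline(10, a2b, b2c, c2d))  # 19, i.e. c2d(b2c(a2b(10)))
```

Unlike the macro, which rewrites the call at macroexpansion time and can route the piped value into any argument position via a keyword, this runtime version always feeds the previous result in as the sole argument.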
Here's the macro definition along with an error condition:<br /><br /></div><textarea cols="80" rows="17" wrap="off">(define-condition received-bad-pipe-keyword (error)
  ((text :initarg :text :reader text)))

(defmacro >> (opt-symbol &rest li)
  (destructuring-bind (pipe-sym lis)
      (if (keywordp opt-symbol)
          (list opt-symbol li)
          (if (listp opt-symbol)
              (list :pipe-in (cons opt-symbol li))
              (error 'received-bad-pipe-keyword
                     :text "Received bad pipe keyword.")))
    (loop for i on lis
          unless (cdr i)
            do (return (car i))
          do (if (find pipe-sym (cadr i))
                 (nsubstitute (car i) pipe-sym (cadr i) :count 1)
                 (nconc (cadr i) (list (car i)))))))</textarea><br /><br />To test that the code works, we can try this at the REPL,<br /><br /><span style="font-family: Courier New, Courier, monospace;">CL-USER> (macroexpand-1 '(>> :pipe-in</span><br /><span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> (a->b x)</span><br /><span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> (b->c) </span><br /><span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> (c->d :pipe-in 0.0625) </span><br /><span style="font-family: Courier New, Courier, monospace;"><span class="Apple-tab-span" style="white-space: pre;"> </span> (d->e)))</span><br /><span style="font-family: Courier New, Courier, monospace;">(D->E (C->D (B->C (A->B X)) 0.0625))</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span>This macro does not attempt to handle multiple return values—though such a development might lead to an interesting result. 
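For a taste of what that could look like, here is a purely hypothetical Python sketch in which a step that returns several values (a tuple) has them spread across the next call's arguments—`pipe_mv` is an invented name and nothing like it exists in the macro:

```python
# Purely hypothetical sketch of threading *multiple* return values:
# when a step returns a tuple, spread its elements over the next
# call's arguments.  `pipe_mv` is an invented name for illustration.
def pipe_mv(value, *steps):
    for f in steps:
        if isinstance(value, tuple):
            value = f(*value)  # spread the "multiple values"
        else:
            value = f(value)
    return value

# divmod returns two values; the next step consumes both of them
result = pipe_mv(17, lambda x: divmod(x, 5), lambda q, r: 10 * q + r)  # → 32
```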
It might be as easy as introducing more keywords.Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-88862261248009747902015-04-04T14:59:00.000-05:002015-04-04T14:59:06.634-05:00Dynamic Zoom in LispWorks' CAPIUsing LispWorks' CAPI, I have implemented a simple program to demonstrate the basic elements of a 2D dynamic zoom feature. The program is basically uninteresting except for the dynamic zoom implementation. You can start with the code immediately, or skip to the other side for a few comments and refer back.<br /><br /><textarea cols="90" rows="73" wrap="off">(in-package :cl-user)

;; to run this program, compile and load into LispWorks and at the listener execute:
;; (capi:contain (make-instance 'dynazoom:dynazoom))

(defpackage :com.blogspot.darrenirvine.dynamiczoom
  (:nicknames :dynazoom)
  (:use :common-lisp :capi)
  (:export dynazoom))

(in-package :dynazoom)

(defparameter *starts* '((10.0d0 . 10.0d0)))
(defparameter *center* (cons 200.0d0 200.0d0))
(defparameter *scroll-step-multiplier* 1.02d0)
(defparameter *scale* 1.0d0)
(defparameter *mouse-x* 0.0d0)
(defparameter *mouse-y* 0.0d0)
(defparameter *top* 0.0d0)
(defparameter *left* 0.0d0)

(defun display-lines (pane x y w h)
  (declare (ignore x y w h))
  (let ((cx (* *scale* (- (car *center*) *left*)))
        (cy (* *scale* (- (cdr *center*) *top*))))
    (loop for s in *starts*
          for sx = (* *scale* (- (car s) *left*))
          for sy = (* *scale* (- (cdr s) *top*))
          do (gp:draw-line pane sx sy cx cy))))

(defun change-start (self x y)
  (push (cons (+ (/ (coerce x 'double-float) *scale*) *left*)
              (+ (/ (coerce y 'double-float) *scale*) *top*))
        *starts*)
  (gp:invalidate-rectangle self))

(defun change-center (self x y)
  (setf *center* (cons (+ (/ (coerce x 'double-float) *scale*) *left*)
                       (+ (/ (coerce y 'double-float) *scale*) *top*)))
  (gp:invalidate-rectangle self))

(defun change-scale (self direction operation scroll-amount
                     &key interactive (do-other-scroll t) &allow-other-keys)
  (declare (ignore direction operation interactive do-other-scroll))
  (let* ((si *scale*)
         (sf (* si (expt *scroll-step-multiplier* (- scroll-amount))))
         (sc (/ (- sf si) (* sf si))))
    (setf *scale* sf)
    (incf *left* (* sc *mouse-x*))
    (incf *top* (* sc *mouse-y*))
    (gp:invalidate-rectangle self)))

(defun change-mouse-position (self x y)
  (declare (ignore self))
  (setf *mouse-x* x
        *mouse-y* y))

(define-interface dynazoom ()
  ()
  (:panes
   (viewer output-pane
           :title "main screen"
           :visible-min-height 200
           :visible-min-width 200
           :display-callback 'display-lines
           :scroll-callback 'change-scale
           :draw-with-buffer t
           :drawing-mode :quality
           :input-model '(((:button-1 :press) change-start)
                          ((:button-3 :press) change-center)
                          ((:motion) change-mouse-position))
           :reader viewer-pane))
  (:layouts
   (row-with-editor-pane row-layout '(viewer)))
  (:default-initargs :title "Dynazoom"))</textarea><br /><br />Every time the user clicks the left button (<b>:button-1</b>), the mouse location is trapped 
and turned into a "model" location for a line starting position. If the user presses the right button (<b>:button-3</b>), the common terminal point for all of the lines is changed—the same principle of storing a "model" point is followed. Basically it is a spider with as many legs as you want originating from whatever point you select.<br /><br />When the program commences, the top-left corner is (0, 0) and the view scale is 1:1. Rolling the scroll wheel of the mouse will change both of these parameters by scaling relative to the current location of the mouse pointer. Since the scroll event does not appear to provide a means to get the current mouse pointer location, I am tracking the mouse <b>:motion</b> "gesture" (basically an "event") and storing the pointer location, in screen coordinates, as it moves.<br /><br />The <span style="font-family: Courier New, Courier, monospace;">change-start</span> (I guess I should have said "add-start"—ah, scope creep) and <span style="font-family: Courier New, Courier, monospace;">change-center</span> handler functions do basically the same thing: using the current mouse position (these two handlers <i>do</i> receive mouse position information), they determine the model point corresponding to the screen position of the mouse. The top-left corner is an actual ("model") location. So the model location that corresponds to the given screen coordinates is the top-left corner plus the model distance between the mouse pointer and the top-left corner of the screen. Thus, \(M' = (M/s + TL)\), where \(M'\) is the model location. We manipulate the same formula to obtain screen coordinates (\(M\)) from the model coordinates when we display to the screen.<br /><br />The part that is a little trickier is to adjust the top-left corner when dynamic zoom is applied so that you can maintain the mapping between model and screen coordinates. 
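For one coordinate axis, the mapping just described can be sketched in Python (the function names are mine, not from the CAPI program, which inlines this arithmetic):

```python
# One-axis sketch of the screen<->model mapping described above;
# function names are invented for illustration.
def screen_to_model(m, scale, top_left):
    # M' = M/s + TL
    return m / scale + top_left

def model_to_screen(mp, scale, top_left):
    # the same formula solved for M: M = (M' - TL) * s
    return (mp - top_left) * scale

model_x = screen_to_model(120.0, 2.0, 30.0)   # → 90.0
```

Zooming changes the scale factor, so the top-left corner has to be adjusted at the same time to keep the model point under the mouse pointer fixed.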
This is an important part of <span style="font-family: Courier New, Courier, monospace;">change-scale</span>'s job. Our basic formula is<br />$$TL_i + A_i = TL_f + A_f,$$ where \(TL_i, TL_f\) are the initial and final top-left corners (i.e., before and after zoom) and \(A_i, A_f\) are the initial and final distances between the model position of the mouse and the top-left corner. In other words, zooming does not change the mouse location in either screen or model coordinates. Hence,<br />$$TL_f = TL_i + A_i - A_f = TL_i + M {{s_f - s_i}\over{s_f s_i}},$$where \(s_i, s_f\) are the initial and final scale factors.<br /><br />LispWorks Personal Edition for non-commercial use is available here: <a href="http://www.lispworks.com/downloads/">http://www.lispworks.com/downloads/</a>.Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-12069646632522179942015-01-02T20:06:00.001-06:002015-04-22T19:27:38.699-05:00Get Your Text Data Into MaximaComma Separated Value (CSV) files are often a convenient way to move your text data between a variety of software products. The makeup of a CSV file is straightforward and can be observed by opening up a CSV file in a text editor. Excel provides the ability to save a worksheet to a CSV file.<br /><div><br /></div><div>Once your data is in a CSV file, you can read it into <a href="http://maxima.sourceforge.net/">Maxima</a>—with a little bit of code.<br /><br />But first, some housekeeping. In order to have convenient access to routines that you or someone else has written, you'll want to put the files that contain this code on Maxima's search path. The variables <span style="font-family: Courier New, Courier, monospace;">file_search_maxima</span> and <span style="font-family: Courier New, Courier, monospace;">file_search_lisp</span> indicate the paths that Maxima will look through to find code to load. 
You can modify these values at start-up (or at any other time while Maxima is running). Place a file called maxima-init.mac in the location indicated by the environment variable <span style="font-family: Courier New, Courier, monospace;">MAXIMA_USERDIR</span> and put some code to the effect of the following in it:<br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">file_search_maxima: </span><br /><span style="font-family: Courier New, Courier, monospace;"> append(file_search_maxima, </span><br /><span style="font-family: Courier New, Courier, monospace;"> ["fullpathto/Useful/Maxima/Functions/$$$.{mac,mc}"]);</span><br /><span style="font-family: Courier New, Courier, monospace;">file_search_lisp: </span><br /><span style="font-family: Courier New, Courier, monospace;"> append(file_search_lisp, </span><br /><span style="font-family: Courier New, Courier, monospace;"> ["fullpathto/Useful/Maxima/Functions/$$$.{o,lisp,lsp}"]);</span><br /><br />(See <a href="http://maxima.sourceforge.net/ui-tips.html">here</a> for more settings, and see chapter 32, 
Runtime Environment.)<br /><br />Download the following two files and place them in the directory you have added to your search path.<br /><ol><li><a href="http://www.drivehq.com/file/df.aspx/publish/dsisammy/hemi/csv-l.lisp">CSV-L.lisp</a></li><li><a href="http://www.drivehq.com/file/df.aspx/publish/dsisammy/hemi/csv.mac">CSV.mac</a></li></ol><div>Start up Maxima and then type and execute:</div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">load(csv);</span></div><div><br /></div><div>You will now have access to four functions to help you get your text data into Maxima.</div><div><br /><hr /></div><div><u>Function</u>:<br /><span style="font-family: Courier New, Courier, monospace;">CSVread(filename)</span></div><div><span style="font-family: Courier New, Courier, monospace;">CSVread(filename,linesToSkip)</span></div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">CSVread</span><span style="font-family: inherit;"> </span>will read in the data in the file and turn it into a list of lists of string data. Each line of data becomes a separate list. Optionally, you may specify a positive integer value indicating how many rows to discard, which is handy if your data file contains (possibly multiple) header rows.</div><div><br /><hr /></div><div><u>Function:</u><br /><span style="font-family: Courier New, Courier, monospace;">CSVNumberColumns(filename, skips, columnsToKeep)</span></div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">CSVNumberColumns</span><span style="font-family: inherit;"> </span>reads data in from the file specified by <span style="font-family: Courier New, Courier, monospace;">filename</span><span style="font-family: inherit;"> </span>and skips <span style="font-family: Courier New, Courier, monospace;">skips</span> rows of data (which may be 0). 
Additionally, it will excerpt the file by only retaining the columns requested in the list <span style="font-family: Courier New, Courier, monospace;">columnsToKeep</span>. Each data item in the retained columns is expected to be a valid Maxima expression in string form. Numeric or algebraic expressions qualify. Specifically, the data items must be something which the Maxima function <span style="font-family: Courier New, Courier, monospace;">parse_string</span> can interpret.</div><div><br /><hr /></div><div><u>Function:</u><br /><span style="font-family: Courier New, Courier, monospace;">MakeMatrix(d)</span></div><div><br /><span style="font-family: Courier New, Courier, monospace;">MakeMatrix</span> creates a matrix with the data <span style="font-family: Courier New, Courier, monospace;">d</span>, which is a list of lists. This is handy when you need your data in matrix form to pass along to a least squares function in Maxima, such as <span style="font-family: Courier New, Courier, monospace;">lsquares_estimates</span>. Note that this function does not create a "deep" copy of the data in <span style="font-family: Courier New, Courier, monospace;">d</span>. 
It is really just a separate "view" of the same data.<br /><br /><hr /><br /><u>Example:</u><br /><br /><span style="font-family: Courier New, Courier, monospace;">load(csv)$ /* only needed once per Maxima start-up */</span><br /><span style="font-family: Courier New, Courier, monospace;">d: CSVNumberColumns("D:/path/circledata.csv",1,[2,3]);</span><br /><span style="font-family: Courier New, Courier, monospace;">MakeMatrix(d);</span><br /><span style="font-family: Courier New, Courier, monospace;"><br /></span><span style="font-family: Courier New, Courier, monospace;">(%o9) [[1007.265919151515,1000.932075151515],[1008.902086,1002.038759],[1010.969034,1002.554592],[1013.1443,1002.268315],[1015.598881,1000.804232],[</span><br /><span style="font-family: Courier New, Courier, monospace;">1016.837804,998.9016388],[1017.35833,996.4303561],[1016.548809,993.5742165],[1014.687232,991.6508387],[1012.069706,990.6322456],[1009.539441,990.933599],[</span><br /><span style="font-family: Courier New, Courier, monospace;">1007.03337,992.4675972],[1005.545131,995.07334],[1005.520783,997.8698301]]</span><br /><span style="font-family: Courier New, Courier, monospace;">(%o10) matrix([1007.265919151515,1000.932075151515],[1008.902086,1002.038759],[1010.969034,1002.554592],[1013.1443,1002.268315],[1015.598881,1000.804232],[1016.837804,998.9016388],[1017.35833,996.4303561],[1016.548809,993.5742165],[1014.687232,991.6508387],[1012.069706,990.6322456],[1009.539441,990.933599],[1007.03337,992.4675972],[1005.545131,995.07334],[1005.520783,997.8698301])</span></div><div><br /></div></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-83079330081923753772014-12-13T11:31:00.000-06:002015-04-22T19:40:43.411-05:00What is the Sine of 18°?To find the exact values of sine (and cosine) of a given angle, you need to use identities. 
This particular value is interesting in that it takes a fair bit of work to arrive at analytically, and yet the value is expressible as a fairly simple radical. The first thing to notice is that 90 = 5(18). You probably know the half angle identity, but not a fifth angle identity. And for good reason—the fifth angle identity has multiple solutions; it's a fifth degree polynomial.<br /><br />I did this calculation manually the first time through—some years ago. It goes a lot easier with a computer algebra system. Today, I'm going to use Maxima to solve the problem.<br /><br />An identity which comes up in the development of the complex numbers gives a convenient way of expressing \(\sin {n\theta}\) in terms of combinations of \(\sin \theta\) and \(\cos \theta\).<br /><br />De Moivre's formula:<br />\[[r (\cos \theta + i \sin \theta)]^n = r^n (\cos {n\theta} + i \sin {n\theta}).\]<br />So, if we take \(r=1\) and \(n=5\), we can do a binomial expansion and easily separate the real and imaginary parts. 
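(As a quick numerical sanity check—in Python, not part of the Maxima derivation—De Moivre's formula can be verified at an arbitrary angle:)

```python
import math

# Numerical spot check of De Moivre's formula at an arbitrary angle:
# (cos t + i sin t)^5 should equal cos 5t + i sin 5t.
theta = 0.7  # radians
z = complex(math.cos(theta), math.sin(theta)) ** 5

assert abs(z.real - math.cos(5 * theta)) < 1e-12
assert abs(z.imag - math.sin(5 * theta)) < 1e-12
```

With the formula confirmed, the binomial expansion itself is easiest to leave to a computer algebra system.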
In a Maxima cell, I type<br /><br /><span style="font-family: Courier New, Courier, monospace;">eq: (cos(%theta) + %i*sin(%theta))^5 = cos(5*%theta) + %i*sin(5*%theta);</span><br /><span style="font-family: Courier New, Courier, monospace;">realpart(eq);</span><br /><span style="font-family: Courier New, Courier, monospace;">imagpart(eq);</span><br /><br />After executing, I obtain the fifth angle identities (if you want to call them that).<br />\[\mathrm{cos}\left( 5\,\theta\right) =5\,\mathrm{cos}\left( \theta\right) \,{\mathrm{sin}\left( \theta\right) }^{4}-10\,{\mathrm{cos}\left( \theta\right) }^{3}\,{\mathrm{sin}\left( \theta\right) }^{2}+{\mathrm{cos}\left( \theta\right) }^{5}\]<br />\[\mathrm{sin}\left( 5\,\theta\right) ={\mathrm{sin}\left( \theta\right) }^{5}-10\,{\mathrm{cos}\left( \theta\right) }^{2}\,{\mathrm{sin}\left( \theta\right) }^{3}+5\,{\mathrm{cos}\left( \theta\right) }^{4}\,\mathrm{sin}\left( \theta\right) \]<br />To eliminate \(\cos^2 \theta\) from the sine identity we use<br /><br /><span style="font-family: Courier New, Courier, monospace;">subst([cos(%theta)=sqrt(1-sin(%theta)^2)],%);</span><br /><span style="font-family: Courier New, Courier, monospace;">expand(%);</span><br /><br />and obtain<br />\[\mathrm{sin}\left( 5\,\theta\right) =16\,{\mathrm{sin}\left( \theta\right) }^{5}-20\,{\mathrm{sin}\left( \theta\right) }^{3}+5\,\mathrm{sin}\left( \theta\right). \]<br />Taking \(\theta=18°\), using \(x\) to stand in for \(\sin 18°\), and rearranging, we have<br />\[0 = 16\,{x}^{5}-20\,{x}^{3}+5\,x - 1.\]<br />Factoring this one manually was made easier by recognizing \(x = 1\) as a root. After that I tried forming a square free polynomial and found that there was in fact a squared polynomial involved. 
In general, this helps in a couple ways (potentially):<br /><ol><li>Reduces the degree of the polynomial to be factored.</li><li>Sometimes produces fully reduced factors as a by-product of forming the square free polynomial.</li><ol><li>This happens when each repeated factor has a different multiplicity; that is, occurs a different number of times.</li></ol></ol><div>(Square free means no factors left that are raised to some power—each factor occurs exactly once.) Doing this manually involves taking the greatest common divisor between a polynomial and its derivative. But today, let's let Maxima do it.</div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">factor(0=16*x^5-20*x^3+5*x - 1);</span></div><div>\[0=\left( x-1\right) \,{\left( 4\,{x}^{2}+2\,x-1\right) }^{2}.\]</div><div>At this point, you really do have to be exceedingly lazy to not continue with the rest of the work manually. It's just a quadratic formula to be applied now (we discard \(x=1\) as an artifact). To use a computer algebra system for the rest is...well...okay, it <i>is </i>Saturday, so...</div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">solve(0=4*x^2+2*x-1,[x]);</span></div><div>\[[x=-\frac{\sqrt{5}+1}{4},x=\frac{\sqrt{5}-1}{4}]\]</div><div>We reject the negative solution (which is \(\sin {-54°}\) and, interestingly, is the negative of half the golden ratio) on the grounds that the sine function is positive on our interval. Hence,</div><div>\[\sin 18° = \frac {\sqrt{5} - 1}{4}.\]</div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-63635664507870648412014-11-12T23:28:00.000-06:002014-11-12T23:28:43.858-06:00The Man, the Boy, and the DonkeyA criticism for every alternative. And praise for none.<br /><br />I suppose the story of the man, the boy, and the donkey comes up in everybody's life, and more frequently than they like. It is as timeless as the complaining nature of humanity. 
When you're looking at the alternatives, and you don't like any of them, it's something you think about. I think about it even more when the option I like best is probably the more expensive option—at least, in the short run.<br /><br />This is an issue that comes up in software. I'm thinking, in particular, of re-factoring. Re-factoring feels good. It makes your code base cleaner, more maintainable, and makes further work on a given project easier. But it may not produce the immediate, visible results that may be expected of you. I sometimes face the choice between boilerplate code and re-factoring. I prefer re-factoring. It appeals to the abstract intellect. Make something hard, easier. But honestly, if I just need to get project Y done, sometimes it's faster to copy project X and tweak a few things.<br /><br />And both choices can be criticized:<br /><ul><li>"Why do you have this repetitive code?" (Sometimes saves time in the short run.)</li><li>"Why did you waste all that time tweaking code that already works?" (To make future modifications and additions, or even entire projects, easier.)</li></ul><div>Now, how was the repetitive code born? You solved problem X for a specific aspect of your program. There was only one and/or you were feeling your way through the problem. Perhaps a challenging problem, uncharted territory, documentation problems. A lot of experimentation. And you're not yet clear on what the best way to break it all down is.</div><div><br /></div><div>But now you need to solve problem Xa. Similar, but different enough that the same code won't do the job. So, you grab the code for problem X and tweak it. Done. Move on.</div><div><br /></div><div>Sooner or later, perhaps you need to solve problem Xb. Same song, same verse, a little bit louder and a little bit worse. Now you wish you had re-factored earlier. 
On the other hand, now you have three problems that are similar and you have material to work with to abstract into the kinds of patterns you need to see in order to re-factor. You know that problem X, Xa, and Xb are similar in an intuitive way, but maybe you needed to see them up close and personal to compare them and decide how to generalize so as to make the most useful subroutines for future work. And seeing as you have X and Xa working, a code re-factor on these areas of code could make a pretty good testing ground for the new subroutines. Hopefully, you can introduce one main modification at a time to these and be fairly confident that, if something's not working, the problem is with the newly introduced (re-factored) code. You already have (in measure) isolation of the (potential) problem area. (Isolation, the troubleshooter's creed.)</div><div><br /></div><div>So, having re-factored X and Xa with Xb in mind, you're well on your way to having Xb solved. But we don't think that way under deadline scenarios or when we're fixated on some incentive. Abstracting, generalizing, fine-tuning, posing hypotheses and testing them. These are creative processes. And they don't directly produce the end result to solve Xb. And it might be faster to boilerplate it. But if you invest in a re-factor now, it is likely to improve your delivery time on Xc, Xd, etc.</div><div><br /></div><div>I suppose knowing when to sharpen the ax is a judgement call.</div><div><br /></div><div>One of the strategies that helps me get well-factored code earlier in the game is a REPL (Read-Eval-Print Loop). Now that I know what they are, I sorely miss them when they are not available for my project. With a REPL, you would approach the solving of problem X with a view to making it easy to solve that particular problem. If you can think of generalizations right then and there, so much the better. 
But rather than having the whole program together and testing it in one big mess, you can fairly easily test little bits and pieces of the program and get them working. This helps you "feel" through a problem and see, more easily, how to break the problem up into pieces. It gives you quick turnaround on testing ideas.</div><div><br /></div><div>It doesn't guarantee you'll land on something that will make solving Xa easier (perhaps you don't even know about problem Xa yet), but it improves your odds. And just maybe you'll more easily remember and understand that great solution you came up with for problem X and have an easier time making appropriate modifications. (Sometimes reading old code is like reading old notes that aren't as clear as you thought they were when you wrote them.)</div><div><br /></div><div>A REPL helps you feel your way through a problem, do experiments (dealing with uncertainty), test components separately (isolation), and produce modular solutions.</div><div><br /></div><div>From a project management perspective it is:</div><div><ul><li>A planning tool</li><li>A risk management tool</li><li>A monitoring and controlling tool</li><li>Not to mention, an execution tool</li></ul><div>REPLs are commonplace for F# and Common Lisp.</div></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-53375346451305682602014-10-04T00:41:00.000-05:002014-12-25T16:43:11.155-06:00Merging Uniquishly in Common LispI've been working on some code with DXF files and I needed to have some generalized merging. In particular, for my first function, I wanted to merge headers without creating duplicate entries. My reader sorts the header values, so when I go to merge I just need to have a simple merge that looks at the heads of two lists, collects the lower value, and advances only the list that had an element "consumed" (as in, I collected it). 
If the values are equal, it should collect one instance, and consume from both lists (i.e., advance—take the cdr of—both lists).<br /><br />The approach I've taken does not guarantee a list of unique items, only that if the original lists are sorted and contain no duplicates, then the resulting list will be sorted and contain no duplicates. It is also non-destructive, returning a new list.<br /><br />(Confession: Actually, I did it the hard way the first time. Custom code that can only do one thing. Now I'm going back and amending my ways as an ardent Lisper.)<br /><br />I went with a loop macro for one reason: the collect keyword. This is one of the things that keeps me coming back to the loop macro. I like it overall, but when I'm struggling with the loop macro, I sometimes want to use the <b>do* </b>macro. But I dislike the way <b>do* </b>lends itself to <b>cons</b>ing and a final call to <b>reverse</b>. Hence, I come back whimpering to the loop macro.<br /><br />The code:<br /><br /><textarea cols="80" disabled="true" rows="24">(defgeneric non-duplicating-merge (a b triple-test)
  (:documentation "Non-destructive merge."))

(defmethod non-duplicating-merge ((a list) (b list) tri-test)
  (loop for (alist blist) = (list a b) then (list nexta nextb)
        for ait = (car alist)
        for bit = (car blist)
        for nexta = alist
        for nextb = blist
        for tri-test-res = (funcall tri-test ait bit)
        if (< tri-test-res 0)
          collect ait into clist and do (setf nexta (cdr alist))
        else if (> tri-test-res 0)
          collect bit into clist and do (setf nextb (cdr blist))
        else
          collect ait into clist and do (setf nexta (cdr alist)
                                              nextb (cdr blist))
        end
        when (not nexta)
          append nextb into clist and return clist
        when (not nextb)
          append nexta into clist and return clist))</textarea><br /><br />Here's some sample output from an EMACS session:<br /><br /><textarea cols="80" disabled="true" rows="3">CL-USER> (non-duplicating-merge (list 1 2 5) (list 1 2 4) #'-)
(1 2 4 5)</textarea><br /><br />The custom code solution 
was 23 lines and not so pleasant looking. The tri-test follows the custom of using a negative value to indicate a < b, positive for a > b, and 0 for a = b. For numeric data, subtraction is the natural choice for a tri-test.<br /><br />I tried using some <b>for</b> keywords inside of <b>if</b> statements and my horse (compiler) wouldn't make the jump. But that's okay, I used a <b>for</b> statement at the beginning to initialize some variables (nexta and nextb) which I could then <b>setf</b> within <b>do</b> clauses as necessary.<br /><br />The more I use the loop macro, the more I like it. Eventually, maybe I will discover all of the things that cause it to choke, stop doing those things, and then I will like it even more.Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-14602528920942963372014-09-13T22:37:00.001-05:002015-08-22T19:21:52.390-05:00Bulge Value (code 42) in LWPolylinePolylines are a pretty handy way to represent and manipulate outlines of objects in CAD software. However, the very robust polyline entity can do so many things that it is somewhat complicated. The light weight polyline (LWPolyline) entity is a trimmed down version of polylines that does the most frequently needed things fairly compactly and straightforwardly.<br /><br />LWPolylines are strictly 2D objects. Of course, they can be created in a UCS other than the WCS but they will be planar, and the points stored in them will be 2D points relative to their object coordinate system (OCS—I haven't really worked with OCSs so I'm not much help other than to tell you to learn the Arbitrary Axis Algorithm described in the free DXF Reference from Autodesk, which I've never actually learned properly).<br /><br />LWPolylines still have the ability to represent arcs. Actually, I don't know how these are represented in Polylines, but in LWPolylines, arcs are fairly simple. 
All you need is the bulge value (code 42), which is tan(θ/4), where θ is the central angle.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-wd5TmPVRhCg/VBTrs5zokqI/AAAAAAAAEH0/LcyQ7CaFYZc/s1600/Capture6.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="http://2.bp.blogspot.com/-wd5TmPVRhCg/VBTrs5zokqI/AAAAAAAAEH0/LcyQ7CaFYZc/s1600/Capture6.PNG" width="285" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig.1. θ is the central angle.</td></tr></tbody></table>Which raises an interesting question: why do we need the tangent of a <i>quarter </i>of the central angle?<br /><div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-9QeZPu-5its/VBUNJZdHm3I/AAAAAAAAEIU/kaIUTjIF0QM/s1600/Capture9.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="484" src="http://1.bp.blogspot.com/-9QeZPu-5its/VBUNJZdHm3I/AAAAAAAAEIU/kaIUTjIF0QM/s1600/Capture9.PNG" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 2. If the arc above was drawn from A to B, the bulge value would be negative (CW).</td></tr></tbody></table>One reason is that it is compact. Other than storing points A and B, which you already need, the only additional information to unambiguously denote the arc from A to B is the bulge value. If the radius were stored instead, we would have up to four different arcs that could be meant. The SVG representation requires you to enter two parameters after the radius that clarify which of the four arcs you want. 
SVG is trying to be human-reader friendly, but in DXF, we're not as worried about that. I find it easier to deal with the bulge as far as computations are concerned.</div><div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-8BLD69AQ7wM/VBT19-xkzII/AAAAAAAAEII/eqC0SUuLeE0/s1600/Capture8.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="377" src="http://1.bp.blogspot.com/-8BLD69AQ7wM/VBT19-xkzII/AAAAAAAAEII/eqC0SUuLeE0/s1600/Capture8.PNG" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Fig. 3. Four possible arcs of a given radius drawn from A to B. Which one do you want?</td></tr></tbody></table><div>I imagine the choice of storing the tangent of the angle is for computation speed. The choice of a quarter of the angle is probably for two reasons:</div><div><ol><li>Tangent has a period of 180°. Now, if you used the tangent of half the angle subtended, this might appear initially to be sufficient, except that you would not be able to use the sign of the value to decide between CW and CCW since you would need the sign to decide between less than or greater than 180° arcs. You would also have an asymptote get in the way of drawing semi-circles. By storing the tangent of a quarter of the arc subtended, we restrict the domain to angles which yield a positive tangent and so the sign value can be used for something else. It also avoids data representation limitations until you get close to having a complete circle (at which point, the user should probably consider using a circle). 
I don't know what the technical solution for storing a full circle would be, other than a sentinel value or simply not letting the user do that.</li><li>The diagram above shows some handy interrelationships that allow us to fairly easily calculate the rise (i), the radius, the peak (P), and the center (C). A bit of algebra produces \(r = \frac{u (b^2+1)}{4b}\) and \(i = \frac{b u}{2}\), where \(b\) is the bulge value and \(u\), \(r\) and \(i\) are as in Fig. 2. Calculating \(u\) unfortunately requires a square root, but no trigonometric function calls. Finding C and P involves using \(r\) and \(i\) along with a vector perpendicular to line AB [e.g., \((x,y) \to (-y,x) \), but watch which one gets the negative depending on which way you're turning].</li></ol><div>(See <a href="http://darrenirvine.blogspot.com/2015/08/polylines-radius-bulge-turnaround.html">Polylines: Radius-Bulge Turnaround</a> for a follow-up post to this one.)</div></div></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-24919425708556677342014-08-22T23:30:00.001-05:002014-08-22T23:30:31.653-05:00SVG Display in .NET WebBrowser ControlThe following example illustrates how to get the WebBrowser control in .NET to display inline SVG files:<br /><br /><textarea cols="80" rows="11"><html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=9" />
</head>
<body>
<svg height="200" width="200">
  <circle cx="100" cy="100" r="98" stroke="black" stroke-width="3" fill="red" />
</svg>
</body>
</html></textarea><br /><br />The above file will display correctly in the WebBrowser control. 
The meta tag is what you need.<br /><br />Thanks to Lukas on this under-acknowledged stackoverflow answer:<br /><a href="http://stackoverflow.com/questions/8852722/webbrowser-control-wpf-or-windows-forms-cant-display-svg-files-contained-in">http://stackoverflow.com/questions/8852722/webbrowser-control-wpf-or-windows-forms-cant-display-svg-files-contained-in</a>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-16255786068110861942014-08-22T23:19:00.002-05:002014-12-31T19:33:12.263-06:00iMaxima and Slime in EMACS<i style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;">First of all, I am a Windows user. Lots of (probably) very helpful documentation for how to do this is not directly applicable to my setup, because I am not a Linux user. Bummer. That being said, it worked out in the end.</i><br /><br style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;" /><span style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;">If you just want to hurry up and try Common Lisp in a REPL, then try </span><a href="http://common-lisp.net/project/lispbox/" style="background-color: white; color: #888888; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px; text-decoration: none;">Lispbox</a><span style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;">.</span><br /><br style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;" /><span style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;">I recently set out to do two things:</span><br /><ol style="background-color: white; color: #222222; 
font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;"><li style="margin: 0px 0px 0.25em; padding: 0px;">Get EMACS to run Slime (with <i>any </i>Common Lisp implementation)</li><li style="margin: 0px 0px 0.25em; padding: 0px;">Get EMACS to run iMaxima</li></ol><b style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;"><span style="font-size: large;">Slime</span></b><br /><br style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;" /><span style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;">Watch </span><a href="http://www.youtube.com/watch?v=VnWVu8VVDbI" style="background-color: white; color: #888888; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px; text-decoration: none;">Installing Common Lisp, Emacs, Slime & Quicklisp</a><span style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;">: It's a demo of installing EMACS with SBCL, Quicklisp, and Slime.</span><br /><div style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;"><ul style="line-height: 1.4; margin: 0.5em 0px; padding: 0px 2.5em;"><li style="margin: 0px 0px 0.25em; padding: 0px;">The primary hangup I was having until I saw this video is that I was trying to find a way to get EMACS to do something that I was supposed to do from within a running instance of a selected Common Lisp implementation. In other words, if you want to use CCL in EMACS using Slime, you should run CCL in a cmd window and use <i>that </i>REPL. 
Like many solutions, it's totally obvious when it is shown to you, which is why you may have trouble finding explicit statements about this.</li><li style="margin: 0px 0px 0.25em; padding: 0px;">If you want to see a few of the basic environment variable modifications in slow motion (as it were), see <a href="http://sachachua.com/blog/2012/06/making-gnu-emacs-play-well-on-microsoft-windows-7/" style="color: #888888; text-decoration: none;">here</a>.</li></ul><div>To start slime, I launch EMACS and then type ALT-X slime ENTER.</div><div><br /></div><span style="font-size: large;"><b>iMaxima</b></span></div><div style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;"><br />Visit the main site for <a href="https://sites.google.com/site/imaximaimath/Home" style="color: #888888; text-decoration: none;">iMaxima</a> to see what it's about. It also has a section for installation instructions. Most of what you need to know is on that website. Below are details from my own setup and additional research that helped me get the system up and running on my computer.<br /><br />After you have EMACS setup and have decided your HOME folder (see <a href="http://sachachua.com/blog/2012/06/making-gnu-emacs-play-well-on-microsoft-windows-7/" style="color: #888888; text-decoration: none;">here</a>), run EMACS to let it create a ".emacs.d" folder. Then create or edit the file init.el (el means EMACS Lisp, I believe). 
Here is my entire init.el (minus most comments), which includes commands for a REPL with SBCL and for iMaxima (separately):</div><div style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;"><br /></div><textarea cols="80" rows="12" wrap="off">(load (expand-file-name "~/quicklisp/slime-helper.el"))
;; Replace "sbcl" with the path to your implementation
(setq inferior-lisp-program "sbcl")
(add-to-list 'load-path "C:/Program Files (x86)/Maxima-5.31.2/share/maxima/5.31.2/emacs")
(load "setup-imaxima-imath.el")
(autoload 'maxima-mode "maxima" "Maxima mode" t)
(autoload 'imaxima "imaxima" "Frontend for maxima with Image support" t)
(autoload 'maxima "maxima" "Maxima interaction" t)
(autoload 'imath-mode "imath" "Imath mode for math formula input" t)
(setq imaxima-use-maxima-mode-flag t)
(add-to-list 'auto-mode-alist '("\\.ma[cx]" . maxima-mode))</textarea><br /><br style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;" /><span style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.5599994659424px;">Size of TeX output can be modified using (setq imaxima-fnt-size "xxxx"), where xxxx is one of small, normalsize, large, Large, LARGE, huge, or Huge. 
Also, by viewing the imaxima.el file, you can see a number of other options that are available to you.</span><br /><span style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;"><br /></span><span style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;">To understand what's happening a bit better, it helps to know a few variables.</span><br /><ol style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;"><li style="margin: 0px 0px 0.25em; padding: 0px;">Go to your scratch buffer in EMACS.</li><li style="margin: 0px 0px 0.25em; padding: 0px;">Type inferior-lisp-program CTRL-j.</li><li style="margin: 0px 0px 0.25em; padding: 0px;">You will get the output "sbcl" if you used my init.el file.</li></ol><div style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;">I suggest the scratch buffer because it matters for debugging when you don't have a working setup of anything else: the scratch buffer is always available. 
It runs ELISP, EMACS' own Lisp dialect.</div><div style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;"><br /></div><div style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;">Here are a few other variables you might benefit from looking at:</div><div style="background-color: white; color: #222222; font-family: Calibri; font-size: 15px; line-height: 21.559999465942383px;"><ul style="line-height: 1.4; margin: 0.5em 0px; padding: 0px 2.5em;"><li style="margin: 0px 0px 0.25em; padding: 0px;">load-path</li><li style="margin: 0px 0px 0.25em; padding: 0px;">exec-path</li><li style="margin: 0px 0px 0.25em; padding: 0px;"><b>image-library-alist</b></li></ul><div>I've bolded image-library-alist because it is really helpful to know what file you need to display png files. I snagged the files jpeg62.dll, libpng14-14.dll, and zlib1.dll from a couple of sources and placed them in "C:\Program Files (x86)\emacs-24.3\bin" which is where my "runemacs.exe" is. </div><div><ul style="line-height: 1.4; margin: 0.5em 0px; padding: 0px 2.5em;"><li style="margin: 0px 0px 0.25em; padding: 0px;">Notably, I got libpng14-14.dll and zlib1.dll from "C:\Program Files (x86)\Inkscape". (You really should try Inkscape, the free SVG editor, anyways.)</li><li style="margin: 0px 0px 0.25em; padding: 0px;">If your image-library-alist has other files in it, you either need to augment your image-library-alist (probably in your init.el file) or find one of the files already in image-library-alist and place it in the load path. 
You only need to do this for png files (as far as I know) in order to use imaxima.</li></ul><div>A command you should try running (still in the scratch buffer) is</div><div><br /></div><div><span style="font-family: 'Courier New', Courier, monospace;"> (image-type-available-p 'png)</span></div><div><br /></div><div>which will return T or NIL depending on whether the file type is available or not, respectively. If you get NIL, it means you don't have one of the files in image-library-alist in your load-path (or perhaps a dependency is missing).</div><div><br /></div><div style="background: rgb(211, 211, 211); border: 1px solid black; padding: 5px;"><b><span style="font-size: medium;">Breqn</span></b><br /><br />I had a lot of trouble with the display of the LaTeX even after I had a running imaxima session in EMACS. (Results of evaluations are usually displayed with LaTeX.) The problem stemmed from a missing package: breqn. I installed the mh package from within the MiKTeX Package Manager and it appeared to install fine, but my LaTeX was still not working properly within an imaxima session. The issue appears to have been related to dynamic loading of packages.<br /><br />Here's what finally shook the problem loose: within TeXnicCenter I started a new project and included the line<br /><br /><span style="font-family: 'Courier New', Courier, monospace;"> \usepackage{breqn}</span><br /><br />along with the other <span style="font-family: 'Courier New', Courier, monospace;">\usepackage </span>declarations that were part of that project. When I built the project I was prompted for the installation of the package and the installation appeared to go smoothly. Then I tried imaxima again and the LaTeX output worked.</div><br /><div>To start an imaxima session, launch EMACS and then type ALT-x imaxima ENTER.<br /><br />Piece of cake! 
:)<br /><br /><div><b><span style="font-size: large;">Additional Info (probably not necessary)</span></b><br /><div><br /></div><div><div>I'm still getting a warning message when I load imaxima, although it has yet to prove to be a problem. The warning is:<br /> <span style="font-family: 'Courier New', Courier, monospace;">(lambda (x) ...) quoted with ' rather than with #'</span></div><div></div><br />I also made another change, and I don't know if it was necessary or not (hopefully not), but while wrestling with something quite different I changed line 14 of maxima-build.lisp (that's "C:\Program Files (x86)\Maxima-5.31.2\share\maxima\5.31.2\src\maxima-build.lisp" on my system) to:</div></div><div><br /></div><div><span style="font-family: 'Courier New', Courier, monospace; font-size: xx-small;">(load "C:/Program Files (x86)/Maxima-5.31.2/share/maxima/5.31.2/share/lisp-utils/defsystem.lisp")</span></div><div><br /></div><div>This is just an explicit file reference rather than a relative path. (In case you're wondering, I was trying, unsuccessfully, to compile Maxima myself. Perhaps another time.)</div></div></div></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-59443776333169746992014-05-10T00:51:00.002-05:002014-05-10T10:11:12.904-05:00Skew Cut ProfilesI recently was surprised (anew) at what happens when you cut a prism which has a uniform cross-section with a plane which is not orthogonal to the length of the prism, especially if the plane must be described in terms of two angles. A cross-section through a square prism is (by definition) a square. If you make your cut at a simple angle, you will get a rectangle. 
But if you make your cut not only a miter but also a bevel, you end up with a parallelogram.<br /><br />The simplest way to visualize this is to imagine a plane that intercepts the prism at a single edge first and has its steepest slope in the direction of the opposite edge:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-_rdvIqSsIHU/U22w8mPsMiI/AAAAAAAAD08/A18dxfmB9yQ/s1600/Capture3.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-_rdvIqSsIHU/U22w8mPsMiI/AAAAAAAAD08/A18dxfmB9yQ/s1600/Capture3.PNG" height="247" width="320" /></a></div>The resulting section when viewed in the intersecting plane would be something like this:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-v6jx3QezAEw/U22x8vp3MyI/AAAAAAAAD1E/o8fo2F0ngLc/s1600/Capture4.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-v6jx3QezAEw/U22x8vp3MyI/AAAAAAAAD1E/o8fo2F0ngLc/s1600/Capture4.PNG" height="320" width="219" /></a></div><br />where the blue square is the cross-section and the red parallelogram is the intersection of the above-described plane with the square prism, viewed in that plane. 
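A quick numerical check of this claim, with coordinates of my own choosing rather than anything from the post: cut a square prism running along the z-axis with a simple miter plane and with a compound miter-and-bevel plane, then compare the corner angles of the two profiles. Since the edge vectors of the profile lie in the cutting plane, their 3D dot product equals the in-plane one, so no change of basis is needed to test for right angles.

```python
# Vertices of the square cross-section of a prism running along z.
square = [(1, 1), (1, -1), (-1, -1), (-1, 1)]

def cut_profile(normal):
    """Intersect the prism's edges with the plane n.p = 0 and return the
    3D corner points of the cut profile."""
    nx, ny, nz = normal
    return [(x, y, -(nx * x + ny * y) / nz) for x, y in square]

def corner_dot(pts, k):
    """Dot product of the two edges meeting at corner k (0 => right angle).
    The edges lie in the cutting plane, so the 3D dot product equals the
    in-plane dot product."""
    a, b, c = pts[k - 1], pts[k], pts[(k + 1) % len(pts)]
    e1 = tuple(q - p for p, q in zip(a, b))
    e2 = tuple(q - p for p, q in zip(b, c))
    return sum(u * v for u, v in zip(e1, e2))

miter = cut_profile((1, 0, 2))      # plane tilted about one axis only
compound = cut_profile((1, 1, 2))   # miter *and* bevel

print(corner_dot(miter, 1))     # 0.0 -> right angles: a rectangle
print(corner_dot(compound, 1))  # 1.0 -> no right angle: a parallelogram
```

Opposite edges still come out equal and parallel (the cut is an affine image of the square), which is why the compound cut gives a parallelogram rather than some general quadrilateral.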
(A few thought experiments of your own will probably serve you better than any further illustration of this phenomenon I could drum up at the moment.)<br /><br />It is interesting to see the result in an I-beam shape if you use a 45° miter and 45° bevel:<br /><br /><a href="http://4.bp.blogspot.com/-YIJI42--MvM/U224k2TbAVI/AAAAAAAAD1U/CVExEWit44c/s1600/Capture5.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" src="http://4.bp.blogspot.com/-YIJI42--MvM/U224k2TbAVI/AAAAAAAAD1U/CVExEWit44c/s1600/Capture5.PNG" /></a><br /><br />As a matter of where this observation fits into mathematics generally, this is a projective or affine transformation. Such transformations preserve straight lines but they do not, in general, preserve right angles.<br /><br /><b>Aside: Convex Hulls</b><br /><br />The fact that they preserve straight lines is a boon to the would-be writer of a convex hull algorithm in an arbitrary plane, since you can project the 3D coordinates onto a simple 2D plane and use the 2D algorithm (being careful to preserve or restore the other coordinate at the end). If you do so, you should probably choose which set of 2 coordinates to use for the algorithm, based on the direction of the plane. For example, if the normal of the plane has a larger z-coordinate than x- or y-coordinates, then the (x,y) pairs are the best information to run the 2D convex hull algorithm on. 
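This projection trick can be sketched as follows (helper names are my own, and the 2D hull shown is Andrew's monotone chain, not necessarily the algorithm you have on hand):

```python
def projection_axes(normal):
    """Pick which two coordinates to keep for the 2D hull: drop the axis
    in which the plane's normal component is largest."""
    k = max(range(3), key=lambda j: abs(normal[j]))
    return [j for j in range(3) if j != k]

def hull2d(pts):
    """Andrew's monotone-chain convex hull; returns CCW hull vertices."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, list(reversed(pts)))):
        for p in seq:
            # Pop points that would make a clockwise (or collinear) turn.
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def hull_of_plane_points(points3d, normal):
    """Project onto the chosen coordinate plane, then run the 2D hull."""
    i, j = projection_axes(normal)
    return hull2d([(p[i], p[j]) for p in points3d])
```

For points lying in the yz-plane (normal along x), this keeps the (y,z) pairs, avoiding the degenerate always-use-(x,y) projection warned about below.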
There are two reasons to be picky about this:<br /><ol><li>If the points fed to the routine are in the yz-plane, you will get incorrect results (if you assumed the (x,y) coordinates should always be used).</li><li>It may reduce the effect of rounding errors in the calculations (in terms of deciding whether points "near the line" of the hull are inside or part of the hull).</li></ol><div><b>Code</b><br /><br />The Maxima code that I used to produce two of the illustrations in this post can be found here:<br /><ol><li><a href="http://www.drivehq.com/file/df.aspx/publish/dsisammy/hemi/dxfert1.lisp">DXFert1.lisp</a>: contains some Lisp routines for writing DXF files. Very basic text output for producing a bunch of DXF lines. Used by 2.</li><li><a href="http://www.drivehq.com/file/df.aspx/publish/dsisammy/hemi/Double-Skewed%20Cross-section.wxm">Double-Skewed Cross-section.wxm</a>: uses matrix multiplication to do rotations, uses the solve routine on simple linear equations, defines an I-beam cross-section from basic metrics, etc. For DXF file output, name the output file appropriately (use a full path name with forward slashes (/) to be sure it goes where you want, unless you know your Maxima setup well) and identify the path to DXFert1.lisp.</li></ol><div>The code files are free to use on an "as is" basis. </div><div><br /></div><div>See the <a href="http://maxima.sourceforge.net/">Maxima project</a> for download information for Maxima.</div></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-10666259555429369222014-05-03T15:19:00.002-05:002014-05-03T15:19:56.355-05:00What Do You Mean, "Equal"?Mathematics makes use of the terms equal, equivalent, similar and congruent quite frequently—we could stretch ourselves out a little further and include analogous. They all have something in common: they refer to some kind of "sameness." 
But they are not interchangeable.<br /><br />By similar, when speaking of shapes in basic geometry, we mean that all of the angles between adjacent lines are the same between the two shapes being considered, and that those angles can be construed as being in the same order (say that three times fast). We have to sharpen our pencils to deal with curved shapes, but we can in fact speak precisely about similarity between curves. We might say, A is similar to B if and only if there is some function <i>f</i> that turns A into B and is (strictly) a composition of translations, rotations, and (universal) scaling.<br /><br />By congruent, when speaking of shapes in geometry, we are referring to similarity, but nix the scaling. Scaling not allowed. This is a more restrictive requirement. Two things might be similar, but not congruent.<br /><br />The word "equivalent" is often used in the context of logic. (Logic, by the way, is what many math geeks <i>really </i>like about math—<i>not</i> mainly numbers! Mind you, it is generally hard to argue with the "logic" of numbers when they have been handled rightly. But numbers aren't really useful without logic being applied to their interpretation.) Two propositions are equivalent to each other if they have the same "truth table". When we say propositions P and Q are equivalent we mean that whenever P is true, Q is true; when Q is true, P is true; when P is false, Q is false; and when Q is false, P is false.<br /><br />Equivalence is defined for other applications as well. Equivalence classes are an overarching concept that can be applied in describing similarity, congruence, and various types of equality. What constitutes equivalent has to be spelled out either by context, convention, or explicit definition for clear communication to occur.<br /><br />Suppose I said triangle A is equal to triangle B? 
If you've taken a bit of geometry, you know that you are supposed to frown when people say rubbish like that.<br /><ul><li>Do you mean the triangles have equal area?</li><li>Do you mean the triangles have equal perimeter?</li><li>Do you mean the angles are all equal? (similar)</li><li>Do you mean the corresponding angles and sides are equal? (congruent)</li></ul><div>When we talk about equality in whatever form, we should give some thought to what we mean. In well-established fields, this is communicated by using the relevant term which has been defined in that field for the "kind of sameness" you wish to refer to. However, when the field is not so well established, or the audience might not share the same understanding of the meaning, it's best to be explicit.<br /><br /><b><span style="font-size: x-large;">Application</span></b><br /><br />What, for example, does it mean that "all men are created equal"? I do not intend to discuss the matter at length (as I do not pretend to have a complete answer) but only to point out that our discussions and claims of the properness of equality amongst human beings are fraught with the difficulty of differing understandings of what "equal" and "equality" refer to in such a discussion. For the time being I will content myself with one great, confusing ideal prevalent in my culture (western): meritocracy. (I mean the term in a loose sense—there probably is no such thing, in reality, on planet earth.)<br /><br />You may well be shocked at my so (apparently) deriding such a principle. To judge by merits is, in many circumstances, to be contrasted with partiality or prejudice. I heartily recommend the phrase "all men are created equal", but on what basis? Surely merit will not undergird this recommendation. Merit is a basis for distinguishing, which I also recommend!<br /><br />Perhaps you do not believe that the two issues (merit vs inherent equality) are commonly confounded. I am satisfied that they are. 
When I hear the merits of representatives of one group (demarcated by ethnicity, gender, or whatever) touted for the vindication of that group, I normally listen quietly, but I do not need such things. Nor do I have much appreciation for the comparing of statistical averages of various metrics in an effort to prove equality. I realize such things are sometimes done in an effort to silence and defend against bigotry, and I do not deride that intent. But I have also seen the defended group touted implicitly as superior (better at this, better at that), in which case I take it as a betrayal of a pretense—appearing to seek equality but with dominance of some kind as the true objective. I do not see inherent equality as based on merit. My key word here, as you may guess, is "inherent". There may be value in recognizing differences, but merit is the "poor man's" rubric for deciding the matter of inherent equality.<br /><br />Whatever you think of these things, you'll have to deal in your own mind with what you mean by "equal" and what your basis for it is. But I'll tell you where I start:<br /><blockquote class="tr_bq">So God created man in his <i>own</i> image, in the image of God created he him; male and female created he them. (Genesis 1:27)</blockquote></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0tag:blogger.com,1999:blog-2856934212808231920.post-81176670543598101252014-04-05T13:46:00.001-05:002015-01-10T17:40:21.505-06:00Best Fit Circle: 3D considerationsSomebody recently searched for a 3D best fit circle and I realized there may be some special considerations if your circle is on a slope. 
There would also be some tweaks needed if you wanted a best fit sphere instead.<br /><br /><h3>Sphere</h3>First, I should mention that it's possible the searcher was really looking for a sphere: I have observed that many people don't remember specific terms, so they take a 2D object whose name they do know and prefix it with "3D" (there, all better now; Darren: *rolls eyes*). I'm not one to criticize analogies, though I sometimes wonder if people realize they are making one. (If you tell me you're making an analogy, I promise not to roll my eyes. But tell me up front, so I can catch myself.)<br /><br />Anyways, if it's a sphere you want a best fit of, the equations for the sum of the squared errors (SSE) and the radius (r) become:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-ea2sRvlJ8Ds/U0A8TUw33WI/AAAAAAAAD0M/JO4tVMTb310/s1600/sse.PNG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-ea2sRvlJ8Ds/U0A8TUw33WI/AAAAAAAAD0M/JO4tVMTb310/s1600/sse.PNG" /></a></div><div class="separator" style="clear: both; text-align: left;">and</div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-uRmcbxwMK9E/U0A8TRYiCfI/AAAAAAAAD0Q/BUacZkE3Bso/s1600/rad.PNG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-uRmcbxwMK9E/U0A8TRYiCfI/AAAAAAAAD0Q/BUacZkE3Bso/s1600/rad.PNG" /></a></div><br /><div class="separator" style="clear: both; text-align: left;">However, if you really want a circle on a sloped plane, I don't recommend using the spherical treatment as a "cheat". I recommend the following method.</div><br /><h3>Circle on a Sloped Plane</h3>Before you get carried away with this method, you need to decide if this is really what you need. 
If you want a shape which will make a circle in plan view (top view), then you need to project your points onto that plane (the xy-plane). This is the same as ignoring the elevation (z-axis). In practice, I'd guess this is really what most people would want. It's true that it will actually be an ellipse when viewed in the plane of the sloped surface; although on only slightly sloping terrain, the difference between that ellipse and a circle probably does not affect the price of rice in China (as we say). <b>If you think the <i>plan view</i> should show a circle, just lop off the z-coordinate and use a plane old 2D method—that is what you actually want.</b><br /><br />But if the slope is considerable and you are sure that you want a best fit circle in the plane of that sloped (and basically planar) surface, there are some steps you can take to obtain transformed data to feed into a regular planar method.<br /><ol><li>Find the best fit plane for your points.</li><li>Determine a transform matrix (T) to rotate your plane into the xy-plane as well as its inverse (T<sup>–1</sup>).</li><ul><li>You may use the normal vector of the best fit plane to determine this by using two rotation matrix transforms in composition.</li><li>You will also need to watch for translation. The plane, prior to rotation, should intersect the origin to work properly. It may be appropriate to use an augmented (4×4) matrix to incorporate translation into the transform as well—this way the inverse will apply the translation as well, rather than having a separate step (only one point though, so no biggie, time-wise).</li></ul><li>Apply the transform T to your points to obtain points in the xy-plane.</li><ul><li>The two rotation matrices can be premultiplied so that you only have to carry out the multiplication between a single transform matrix and the point data. (Recall that matrix multiplication is associative.)</li></ul><li>Ignore the z-coordinates of the transformed points. 
Set them to zero or produce (x,y) data by the trivial xy projection [(x,y,z) => (x,y)] <i>only </i>if this is expected by your 2D best fit circle function or method.</li><li>Apply 2D methods to obtain a best fit circle on the xy-plane data. See <a href="http://darrenirvine.blogspot.com/2012/02/best-fit-circle-find-center-using-excel.html">here for how to set this up in Excel</a>.</li><li>Apply the inverse transform (T<sup>–1</sup>) to your resulting best fit center to put it into the best fit plane.</li></ol><div style="background-color: lightgrey; border: 1px solid black; padding: 5px;"><i>Tip</i>: Be watchful of whether your rotational transforms are assuming column vectors or row vectors. Many textbooks give column vector representations, but some programs may assume a row vector. In the case of Maxima, it will assume a row vector if the vector is given first and a column vector if it is given last: <br /><br /><span style="font-family: Courier New, Courier, monospace;">m1: matrix([c1,-s1,0],[s1,c1,0],[0,0,1]);</span><br /><span style="font-family: Courier New, Courier, monospace;">r: [a,b,c];</span><br /><span style="font-family: Courier New, Courier, monospace;">m1.r; /* column vector representation */</span><br /><span style="font-family: Courier New, Courier, monospace;">r.transpose(m1); /* row vector representation */</span><br /><br /><span style="font-family: Courier New, Courier, monospace;">(%o1) matrix([c1,-s1,0],[s1,c1,0],[0,0,1])</span><br /><span style="font-family: Courier New, Courier, monospace;">(%o2) [a,b,c]</span><br /><span style="font-family: Courier New, Courier, monospace;">(%o3) matrix([a*c1-b*s1],[a*s1+b*c1],[c])</span><br /><span style="font-family: Courier New, Courier, monospace;">(%o4) matrix([a*c1-b*s1,a*s1+b*c1,c])</span></div>Darren Irvinehttps://plus.google.com/104367969098641629458noreply@blogger.com0
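The numbered procedure above can be sketched end to end in Python (a sketch assuming NumPy is available; the plane is found by SVD rather than by composing two explicit rotations, and step 5 uses an algebraic least-squares circle fit rather than the Excel setup linked above):

```python
import numpy as np

def best_fit_circle_3d(points):
    """Steps 1-6 above: fit a plane, rotate the points into the xy-plane,
    fit a 2D circle, and transform the centre back.  The rotation T is
    orthonormal, so its inverse is simply its transpose."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    Q = P - centroid                    # translate so the plane passes through the origin
    # 1. Plane normal = direction of least spread (last right singular vector).
    _, _, vt = np.linalg.svd(Q)
    T = vt                              # 2. rows are an orthonormal basis; T maps the plane to xy
    # 3-4. Rotate and keep only the in-plane coordinates.
    xy = (T @ Q.T).T[:, :2]
    # 5. Algebraic circle fit: 2*x*cx + 2*y*cy + t = x^2 + y^2, t = r^2 - cx^2 - cy^2.
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = float(np.sqrt(t + cx * cx + cy * cy))
    # 6. Inverse transform, then undo the translation.
    centre = T.T @ np.array([cx, cy, 0.0]) + centroid
    return centre, r
```

Feeding it points sampled from a circle of radius 2 centred at (1, 2, 3) on a plane with normal (1, 1, 1)/√3 recovers that centre and radius to machine precision; for noisy field data the plane and circle fits each become genuine least-squares estimates.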