Saturday, December 13, 2014

What is the Sine of 18°?

To find the exact values of sine (and cosine) of a given angle, you need to use identities. This particular value is interesting in that it takes a fair bit of work to arrive at analytically, and yet the value is expressible as a fairly simple radical. The first thing to notice is that 90 = 5(18). You probably know the half angle identity, but not a fifth angle identity.  And for good reason—inverting a fifth angle identity means solving a fifth degree polynomial, which has multiple roots.

I did this calculation manually the first time through—some years ago. It goes a lot easier with a computer algebra system. Today, I'm going to use Maxima to solve the problem.

An identity which comes up in the development of the complex numbers gives a convenient way of expressing \(\sin {n\theta}\) in terms of combinations of \(\sin \theta\) and \(\cos \theta\).

De Moivre's formula:
\[r^n (\cos \theta + i \sin \theta)^n = r^n (\cos {n\theta} + i \sin {n\theta}).\]
So, if we take \(r=1\) and \(n=5\), we can do a binomial expansion and equate real and imaginary parts to express \(\cos {5\theta}\) and \(\sin {5\theta}\) in terms of \(\sin \theta\) and \(\cos \theta\). In a Maxima cell, I type

eq: (cos(%theta) + %i*sin(%theta))^5 = cos(5*%theta) + %i*sin(5*%theta);

After executing, I obtain the fifth angle identities (if you want to call them that).
\[\mathrm{cos}\left( 5\,\theta\right) =5\,\mathrm{cos}\left( \theta\right) \,{\mathrm{sin}\left( \theta\right) }^{4}-10\,{\mathrm{cos}\left( \theta\right) }^{3}\,{\mathrm{sin}\left( \theta\right) }^{2}+{\mathrm{cos}\left( \theta\right) }^{5}\]
\[\mathrm{sin}\left( 5\,\theta\right) ={\mathrm{sin}\left( \theta\right) }^{5}-10\,{\mathrm{cos}\left( \theta\right) }^{2}\,{\mathrm{sin}\left( \theta\right) }^{3}+5\,{\mathrm{cos}\left( \theta\right) }^{4}\,\mathrm{sin}\left( \theta\right) \]
To eliminate \(\cos^2 \theta\) from the sine identity we use \(\cos^2 \theta = 1 - \sin^2 \theta\) and obtain
\[\mathrm{sin}\left( 5\,\theta\right) =16\,{\mathrm{sin}\left( \theta\right) }^{5}-20\,{\mathrm{sin}\left( \theta\right) }^{3}+5\,\mathrm{sin}\left( \theta\right). \]
Taking \(\theta=18°\), using \(x\) to stand in for \(\sin 18°\), and rearranging, we have
\[0 = 16\,{x}^{5}-20\,{x}^{3}+5\,x - 1.\]
Factoring this one manually was made easier by recognizing \(x = 1\) as a root. After that I tried forming a square free polynomial and found that there was in fact a squared factor involved. In general, this helps in a couple of ways (potentially):
  1. Reduces the degree of the polynomial to be factored.
  2. Sometimes produces fully reduced factors as a by-product of forming the square free polynomial.
    1. This happens when each repeated factor is of different degree; that is, occurs a different number of times.
(Square free means no factors left that are raised to some power—each factor occurs exactly once.) Doing this manually involves taking the greatest common divisor of a polynomial and its derivative. But today, let's let Maxima do it.

factor(0=16*x^5-20*x^3+5*x - 1);
\[0=\left( x-1\right) \,{\left( 4\,{x}^{2}+2\,x-1\right) }^{2}.\]
At this point, you really do have to be exceedingly lazy to not continue with the rest of the work manually. It's just a quadratic formula to be applied now (we discard \(x=1\) as an artifact). To use a computer algebra system for the rest is...well...okay, it is Saturday, so...

We reject the negative solution (which is \(\sin {(-54°)}\) and, interestingly, is the negative of half the golden ratio) on the grounds that the sine function is positive on our interval. Hence,
\[\sin 18° = \frac {\sqrt{5} - 1}{4}.\]
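A quick numeric check (a Python sketch, not part of the original derivation) confirms that this radical satisfies both the quintic and the sine value:

```python
import math

# sin 18° as derived above
x = (math.sqrt(5) - 1) / 4

# x should be a root of the quintic 16x^5 - 20x^3 + 5x - 1 ...
residual = 16*x**5 - 20*x**3 + 5*x - 1

# ... and should match the library sine at 18 degrees.
delta = x - math.sin(math.radians(18))
```

Both `residual` and `delta` come out as zero to within floating-point error.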

Wednesday, November 12, 2014

The Man, the Boy, and the Donkey

A criticism for every alternative. And praise for none.

I suppose the story of the man, the boy, and the donkey comes up in everybody's life, and more frequently than they like. It is as timeless as the complaining nature of humanity. When you're looking at the alternatives, and you don't like any of them, it's something you think about. I think about it even more when the option I like best is probably the more expensive option—at least, in the short run.

This is an issue that comes up in software. I'm thinking, in particular, of re-factoring. Re-factoring feels good. It makes your code base cleaner, more maintainable, and makes further work on a given project easier. But it may not produce the immediate, visible results that may be expected of you. I sometimes face the choice between boilerplate code and re-factoring. I prefer re-factoring. It appeals to the abstract intellect. Make something hard, easier. But honestly, if I just need to get project Y done, sometimes it's faster to copy project X and tweak a few things.

And both choices can be criticized:
  • "Why do you have this repetitive code?" (Sometimes saves time in the short run.)
  • "Why did you waste all that time tweaking code that already works?" (To make future modification and additions, or even entire projects, easier.)
Now, how was the repetitive code born? You solved problem X for a specific aspect of your program. There was only one instance, and/or you were feeling your way through the problem. Perhaps a challenging problem, uncharted territory, documentation problems. A lot of experimentation. And you're not yet clear on what the best way to break it all down is.

But now you need to solve problem Xa. Similar, but different enough that the same code won't do the job. So, you grab the code for problem X and tweak it. Done. Move on.

Sooner or later, perhaps you need to solve problem Xb. Same song, same verse, a little bit louder and a little bit worse. Now you wish you had re-factored earlier. On the other hand, now you have three problems that are similar and you have material to work with to abstract into the kinds of patterns you need to see in order to re-factor. You know that problem X, Xa, and Xb are similar in an intuitive way, but maybe you needed to see them up close and personal to compare them and decide how to generalize so as to make the most useful subroutines for future work. And seeing as you have X and Xa working, a code re-factor on these areas of code could make a pretty good testing ground for the new subroutines. Hopefully, you can introduce one main modification at a time to these and be fairly confident that, if something's not working, the problem is with the newly introduced (re-factored) code. You already have (in measure) isolation of the (potential) problem area. (Isolation, the troubleshooter's creed.)

So, having re-factored X and Xa with Xb in mind, you're well on your way to having Xb solved. But we don't think that way under deadline scenarios or when we're fixated on some incentive. Abstracting, generalizing, fine-tuning, posing hypotheses and testing them. These are creative processes. And they don't directly produce the end result to solve Xb. And it might be faster to boilerplate it. But if you invest in a re-factor now, it is likely to improve your delivery time on Xc, Xd, etc.

I suppose, knowing when to sharpen the ax is a judgement call.

One of the strategies that helps me get well factored code earlier in the game is a REPL (Read-Eval-Print Loop). Now that I know what they are, I sorely miss them when they are not available for my project. With a REPL, you would approach the solving of problem X with a view to making it easy to solve that particular problem. If you can think of generalizations right then and there, so much the better. But rather than having the whole program together and testing it in one big mess, you can fairly easily test little bits and pieces of the program and get them working. This helps you "feel" through a problem and see, more easily, how to break the problem up into pieces. It gives you quick turnaround on testing ideas.

It doesn't guarantee you'll land on something that will make solving Xa easier (perhaps you don't even know about problem Xa yet), but it improves your odds. And just maybe you'll more easily remember and understand that great solution you came up with for problem X and have an easier time making appropriate modifications. (Sometimes reading old code is like reading old notes that aren't as clear as you thought they were when you wrote them.)

A REPL helps you feel your way through a problem, do experiments (dealing with uncertainty), test components separately (isolation), and produce modular solutions.

From a project management perspective it is:
  • A planning tool
  • A risk management tool
  • A monitoring and controlling tool
  • Not to mention, an execution tool
REPLs are commonplace for F# and Common Lisp.

Saturday, October 4, 2014

Merging Uniquishly in Common Lisp

I've been working on some code with DXF files and I needed to have some generalized merging. In particular, for my first function, I wanted to merge headers without creating duplicate entries. My reader sorts the header values, so when I go to merge I just need a simple merge that looks at each of two lists and collects the lowest value and only advances the list that had an element "consumed" (as in, I collected it). If the values are equal, it should collect one instance and consume from both lists (i.e., advance—take the cdr of—both lists).

The approach I've taken does not guarantee a list of unique items, only that if the original lists are sorted and contain no duplicates, then the resulting list will be sorted and contain no duplicates. It is also non-destructive, returning a new list.

(Confession: Actually, I did it the hard way the first time. Custom code that can only do one thing. Now I'm going back and amending my ways as an ardent Lisper.)

I went with a loop macro for one reason: the collect keyword. This is one of the things that keeps me coming back to the loop macro. I like it overall, but when I'm struggling with the loop macro, I sometimes want to use the do* macro. But I dislike the way do* lends itself to consing and a final call to reverse. Hence, I come back whimpering to the loop macro.

The code:

Here's some sample output from an EMACS session:

The custom code solution was 23 lines and not so pleasant looking. The tri-test follows the custom of using a negative value to indicate a < b, positive for a > b, and 0 for a = b.  For numeric data, subtraction is the natural choice for a tri-test.
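The same algorithm can be sketched in Python (an illustration of the merge logic with a tri-test comparator, not the original loop-macro code):

```python
def merge_uniquish(a, b, tri):
    """Merge two sorted, duplicate-free lists into one sorted,
    duplicate-free list.  `tri` is a three-way comparator: negative
    for less-than, zero for equal, positive for greater-than.
    Non-destructive: the inputs are left untouched."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        t = tri(a[i], b[j])
        if t < 0:                       # a has the lowest: consume a
            out.append(a[i]); i += 1
        elif t > 0:                     # b has the lowest: consume b
            out.append(b[j]); j += 1
        else:                           # equal: collect once, consume both
            out.append(a[i]); i += 1; j += 1
    out.extend(a[i:])                   # at most one of these
    out.extend(b[j:])                   # has anything left
    return out
```

For numeric data, `lambda p, q: p - q` serves as the tri-test, matching the subtraction convention described above.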

I tried using some for keywords inside of if statements and my horse (compiler) wouldn't make the jump.  But that's okay, I used a for statement at the beginning to initialize some variables (nexta and nextb) which I could then setf within do clauses as necessary.

The more I use the loop macro, the more I like it. Eventually, maybe I will discover all of the things that cause it to choke, stop doing those things, and then I will like it even more.

Saturday, September 13, 2014

Bulge Value (code 42) in LWPolyline

Polylines are a pretty handy way to represent and manipulate outlines of objects in CAD software. However, the very robust polyline entity can do so many things that it is somewhat complicated. The lightweight polyline (LWPolyline) entity is a trimmed down version of polylines that does the most frequently needed things fairly compactly and straightforwardly.

LWPolylines are strictly 2D objects. Of course, they can be created in a UCS other than the WCS but they will be planar, and the points stored in them will be 2D points relative to their object coordinate system (OCS—I haven't really worked with OCSs so I'm not much help other than to tell you to learn the Arbitrary Axis Algorithm described in the free DXF Reference from Autodesk, which I've never actually learned properly).

LWPolylines still have the ability to represent arcs. Actually, I don't know how these are represented in Polylines, but in LWPolylines, arcs are fairly simple. All you need is the bulge value (code 42) which is tan(θ/4), where θ is the central angle.
Fig.1. θ is the central angle.
Which raises an interesting question: why do we need the tangent of a quarter of the central angle?
Fig. 2. If the arc above was drawn from A to B, the bulge value would be negative (CW).
One reason is that it is compact. Other than storing points A and B, which you already need, the only additional information to unambiguously denote the arc from A to B is the bulge value. If the radius were stored instead, there would be up to 4 different arcs that could be meant. The SVG representation requires you to enter two parameters after the radius (the large-arc and sweep flags) that clarify which of the four arcs you want. SVG is trying to be human reader friendly, but in DXF, we're not as worried about that. I find it easier to deal with the bulge as far as computations are concerned.
Fig. 3. Four possible arcs of a given radius drawn from A to B. Which one do you want?
I imagine the choice of storing the tangent of the angle is for computation speed. The choice of a quarter of the angle is probably for two reasons:
  1. Tangent has a period of 180°. Now, if you used the tangent of half the angle subtended, this might appear initially to be sufficient, except that you would not be able to use the sign of the value to decide between CW and CCW since you would need the sign to decide between less than or greater than 180° arcs. You would also have an asymptote get in the way of drawing semi-circles. By storing the tangent of a quarter of the angle subtended, we restrict the domain to angles which yield a positive tangent, so the sign can be used for something else. It also avoids data representation limitations until you get close to having a complete circle (at which point, the user should probably consider using a circle). I don't know what the solution would be for storing a full circle other than a sentinel value or simply not letting the user do that.
  2. The diagram above shows some handy interrelationships that allow us to fairly easily calculate the rise (i), the radius, the peak (P), and the center (C). A bit of algebra produces \(r = \frac{u (b^2+1)}{4b}\) and \(i = \frac{b u}{2}\), where \(b\) is the bulge value and \(u\), \(r\) and \(i\) are as in Fig. 2. Calculating \(u\) unfortunately requires a square root, but no trigonometric function calls. Finding C and P involves using \(r\) and \(i\) along with a vector perpendicular to line AB [e.g., \((x,y) \to (-y,x) \), but watch which one gets the negative depending on which way you're turning].
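Those relationships are easy to put into code. Here is a Python sketch of my own (not from the original post) that recovers the radius, the rise \(i\), and the center from two endpoints and a bulge value; the signed distance from the chord midpoint to the center along the chord's normal works out to \(i - r\):

```python
import math

def bulge_arc(a, b, bulge):
    """Recover (radius, rise, center) for an LWPolyline arc segment
    from endpoints a, b and bulge = tan(theta/4).  Positive bulge
    means the arc runs CCW from a to b (DXF convention), which puts
    the bow to the right of the direction of travel."""
    ux, uy = b[0] - a[0], b[1] - a[1]
    u = math.hypot(ux, uy)                      # chord length
    r = u * (bulge * bulge + 1) / (4 * bulge)   # signed radius
    i = bulge * u / 2                           # signed rise (sagitta)
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    nx, ny = uy / u, -ux / u                    # unit normal, right of travel
    cx, cy = mx + nx * (i - r), my + ny * (i - r)
    return abs(r), abs(i), (cx, cy)
```

As a sanity check, a bulge of 1 (a semicircle, since tan 45° = 1) puts the center at the chord midpoint with radius equal to half the chord. No trigonometric calls are needed, only a square root for the chord length.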
(See Polylines: Radius-Bulge Turnaround for follow up post to this one.)

Friday, August 22, 2014

SVG Display in .NET WebBrowser Control

The following example illustrates how to get the WebBrowser control in .NET to display inline SVG files:
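A minimal page along these lines (my reconstruction, not the original file; the assumption here is that the X-UA-Compatible meta tag is the key element, since the WebBrowser control otherwise renders in a legacy IE compatibility mode that lacks inline SVG support):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Without this, the WebBrowser control falls back to an old IE
       document mode that does not render inline SVG. -->
  <meta http-equiv="X-UA-Compatible" content="IE=9" />
</head>
<body>
  <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
    <circle cx="50" cy="50" r="40" fill="red" />
  </svg>
</body>
</html>
```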

The above file will display correctly in the WebBrowser control.  The meta tag is what you need.

Thanks to Lukas on this under-acknowledged stackoverflow answer:

iMaxima and Slime in EMACS

First of all, I am a Windows user. Lots of (probably) very helpful documentation for how to do this is not directly applicable to my setup, because I am not a Linux user. Bummer. That being said, it worked out in the end.

If you just want to hurry up and try Common Lisp in a REPL, then try Lispbox.

I recently set out to do two things:
  1. Get EMACS to run Slime (with any Common Lisp implementation)
  2. Get EMACS to run iMaxima

Watch Installing Common Lisp, Emacs, Slime & Quicklisp: It's a demo of installing EMACS with SBCL, Quicklisp, and Slime.
  • The primary hangup I was having until I saw this video is that I was trying to find a way to get EMACS to do something that I was supposed to do from within a running instance of a selected Common Lisp implementation. In other words, if you want to use CCL in EMACS using Slime, you should run CCL in a cmd window and use that REPL. Like many solutions, it's totally obvious when it is shown to you, which is why you may have trouble finding explicit statements about this.
  • If you want to see a few of the basic environment variable modifications in slow motion (as it were), see here.
To start slime, I launch EMACS and then type ALT-x slime ENTER.


Visit the main site for iMaxima to see what it's about. It also has a section for installation instructions. Most of what you need to know is on that website. Below are details from my own setup and additional research that helped me get the system up and running on my computer.

After you have EMACS set up and have decided on your HOME folder (see here), run EMACS to let it create a ".emacs.d" folder. Then create or edit the file init.el (el means EMACS Lisp, I believe). Here is my entire init.el (largely excepting comments), which includes commands for a REPL with SBCL and iMaxima (separately):
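A sketch of what such an init.el can look like (the paths below are hypothetical placeholders; the one detail certain from this post is inferior-lisp-program being set to "sbcl"):

```elisp
;; Sketch only -- adjust the load paths to your own Slime/imaxima folders.
(setq inferior-lisp-program "sbcl")           ; Lisp used by Slime

;; Slime
(add-to-list 'load-path "C:/path/to/slime")   ; hypothetical path
(require 'slime)
(slime-setup '(slime-repl))

;; imaxima
(add-to-list 'load-path "C:/path/to/imaxima") ; hypothetical path
(autoload 'imaxima "imaxima" "Image support for Maxima." t)
```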

Size of TeX output can be modified using (setq imaxima-fnt-size "xxxx"), where xxxx is one of small, normalsize, large, Large, LARGE, huge, or Huge. Also, by viewing the imaxima.el file, you can see a number of other options that are available to you.

To understand what's happening a bit better, it helps to know a few variables.
  1. Go to your scratch buffer in EMACS.
  2. Type inferior-lisp-program CTRL-j.
  3. You will get the output "sbcl" if you used my init.el file.
I suggest the scratch buffer because it is important in debugging when you don't have a working setup of anything else. The scratch buffer is available. It runs ELISP, EMACS' own Lisp dialect.

Here are a few other variables you might benefit from looking at:
  • load-path
  • exec-path
  • image-library-alist
I've bolded image-library-alist because it is really helpful to know what file you need to display png files. I snagged the files jpeg62.dll, libpng14-14.dll, and zlib1.dll from a couple of sources and placed them in "C:\Program Files (x86)\emacs-24.3\bin" which is where my "runemacs.exe" is. 
  • Notably, I got libpng14-14.dll and zlib1.dll from "C:\Program Files (x86)\Inkscape". (You really should try Inkscape, the free SVG editor, anyways.)
  • If your image-library-alist has other files in it, you either need to augment your image-library-alist (probably in your init.el file) or find one of the files already in image-library-alist and place it in the load path. You only need to do this for png files (as far as I know) in order to use imaxima.
A command you should try running (still in the scratch buffer) is

    (image-type-available-p 'png)

which will return T or NIL depending on whether the file type is available or not, respectively. If you get NIL, it means you don't have one of the files in image-library-alist in your load-path (or perhaps a dependency of one of those files is missing).


I had a lot of trouble with the display of the LaTeX even after I had a running imaxima session in EMACS. (Results of evaluations are usually displayed with LaTeX.) The problem stemmed from a missing package: breqn. I installed the mh package from within the MiKTeX Package Manager and it appeared to install fine, but my LaTeX was still not working properly within an imaxima session. The issue appears to have been related to dynamic loading of packages.

Here's what finally shook the problem loose: within TeXnicCenter I started a new project and included the line

\usepackage{breqn}

along with the other \usepackage declarations that were part of that project. When I built the project I was prompted for the installation of the package and the installation appeared to go smoothly. Then I tried imaxima again and the LaTeX output worked.

To start an imaxima session, launch EMACS and then type ALT-x imaxima ENTER.

Piece of cake! :)

Additional Info (probably not necessary)

I'm still getting a warning message when I load imaxima, although it has yet to prove to be a problem. The warning is:
   (lambda (x) ...) quoted with ' rather than with #'

I also made another change, and I don't know if it was necessary or not (hopefully not), but while wrestling with something quite different I changed line 14 of maxima-build.lisp (that's "C:\Program Files (x86)\Maxima-5.31.2\share\maxima\5.31.2\src\maxima-build.lisp" on my system). I changed that line to:

(load "C:/Program Files (x86)/Maxima-5.31.2/share/maxima/5.31.2/share/lisp-utils/defsystem.lisp")

This is just an explicit file reference rather than a relative path. (In case you're wondering, I was trying, unsuccessfully, to compile maxima myself. Perhaps another time.)

Saturday, May 10, 2014

Skew Cut Profiles

I was recently surprised (anew) at what happens when you cut a prism which has a uniform cross-section with a plane which is not orthogonal to the length of the prism. Especially if the plane must be described in terms of two angles. A cross-section through a square prism is (by definition) a square. If you make your cut at an angle, you will get a rectangle if you have a simple angle cut. But if you make your cut not only a miter but also a bevel, you end up with a parallelogram.

The simplest way to visualize this is to imagine a plane that intercepts the prism at a single edge first and has its steepest slope in the direction of the opposite edge:
The resulting section when viewed in the intersecting plane would be something like this:

where the blue square is the cross-section, the red parallelogram is the intersection of the above described plane through the square prism when viewed in that plane. (A few thought experiments of your own will probably serve you better than any further illustration of this phenomenon I could drum up at the moment.)

It is interesting to see the result in an I-beam shape if you use a 45° miter and 45° bevel:

As a matter of where this observation fits into mathematics generally, this is a projective or affine transformation. Such transformations preserve straight lines but they do not, in general, preserve right angles.

Aside: Convex Hulls

The fact that they preserve straight lines is a boon to the would-be-writer of a convex hull algorithm in an arbitrary plane since you can project the 3D coordinates onto a simple 2D plane and use the 2D algorithm (being careful to preserve or restore the other coordinate at the end). If you do so, you should probably choose which set of 2 coordinates to use for the algorithm, based on the direction of the plane. For example, if the normal of the plane has a larger z-coordinate than x- or y-coordinates, then the (x,y) pairs are best to use for the information to run the 2D convex hull algorithm on. There are two reasons to be picky about this:
  1. If the points fed to the routine are in the yz-plane, you will get incorrect results (if you assumed the (x,y) coordinates should always be used).
  2. It may reduce the effect of rounding errors in the calculations (in terms of deciding whether points "near the line" of the hull are inside or part of the hull).
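The coordinate-pair choice described above can be sketched as follows (a Python illustration of mine, not from the post):

```python
def project_for_hull(points, normal):
    """Drop the coordinate matching the dominant component of the
    plane normal, so the 2D hull algorithm sees the least-distorted
    projection (and never a degenerate one).  Returns the 2D points
    and the index of the dropped axis, for restoring coordinates
    afterward."""
    nx, ny, nz = (abs(c) for c in normal)
    drop = 0 if (nx >= ny and nx >= nz) else (1 if ny >= nz else 2)
    keep = [i for i in range(3) if i != drop]
    return [(p[keep[0]], p[keep[1]]) for p in points], drop
```

For a plane whose normal is mostly z, this keeps (x,y); for a plane parallel to the yz-plane, it keeps (y,z) instead of failing.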

The maxima code that I used to produce two of the illustrations in this post can be found here:
  1. DXFert1.lisp: contains some lisp routines for writing DXF files. Very basic text output for producing a bunch of DXF lines. Used by 2.
  2. Double-Skewed Cross-section.wxm: uses matrix multiplication to do rotations, uses the solve routine on simple linear equations, defines an I-beam cross-section from basic metrics, etc. For DXF file output, name the output file appropriately (use a full path name with forward slashes (/) to be sure it goes where you want, unless you know your maxima setup well) and identify the path to DXFert1.lisp.
The code files are free to use on an "as is" basis.  

See the Maxima project for download information for Maxima.

Saturday, May 3, 2014

What Do You Mean, "Equal"?

Mathematics makes use of the terms equal, equivalent, similar and congruent quite frequently—we could stretch ourselves out a little further and include analogous. They all have something in common: they refer to some kind of "sameness."  But they are not interchangeable.

By similar, when speaking of shapes in basic geometry, we mean that all of the angles between adjacent lines are the same between the two shapes being considered and those angles can be construed as being in the same order (say that three times fast).  We have to sharpen our pencils to deal with curved shapes, but we can in fact speak precisely about similarity between curves.  We might say, A is similar to B if and only if there is some function f that can be applied to A to turn it into B which is (strictly) a composition of translations, rotations, and (universal) scaling.

By congruent, when speaking of shapes in geometry, we are referring to similarity, but nix the scaling. Scaling not allowed. This is a more restrictive requirement. Two things might be similar, but not congruent.

The word "equivalent" is often used in the context of logic. (Logic, by the way, is what many math geeks really like about math—not mainly numbers! Mind you, it is generally hard to argue with the "logic" of numbers when they have been handled rightly. But numbers aren't really useful without logic being applied to their interpretation.) Two propositions are equivalent to each other if they have the same "truth table". When we say propositions P and Q are equivalent, we mean that whenever P is true, Q is true; whenever Q is true, P is true; whenever P is false, Q is false; and whenever Q is false, P is false.

Equivalence is defined for other applications as well.  Equivalence classes are an overarching concept that can be applied in describing similarity, congruence, and various types of equality.  What constitutes equivalent has to be spelled out either by context, convention, or explicit definition for clear communication to occur.

Suppose I said that triangle A is equal to triangle B. If you've taken a bit of geometry, you know that you are supposed to frown when people say rubbish like that.
  • Do you mean the triangles have equal area?
  • Do you mean the triangles have equal perimeter?
  • Do you mean the angles are all equal? (similar)
  • Do you mean the corresponding angles and sides are equal? (congruent)
When we talk about equality in whatever form, we should take some thought to what we mean. In well established fields, this is communicated by using the relevant term which has been defined in that field for the "kind of sameness" you wish to refer to. However, when the field is not so well established—or if the audience might not have the same understanding of the meaning, it's best to be explicit.


What, for example, does it mean that "all men are created equal"? I do not intend to discuss the matter at length (as I do not pretend to have a complete answer) but only to point out that our discussions and claims of the properness of equality amongst human beings are fraught with the difficulty of differing understandings of what "equal" and "equality" refer to in such a discussion. For the time being I will content myself with one great, confusing ideal prevalent in my culture (western): meritocracy. (I mean the term in a loose sense—there probably is no such a thing, in reality, on planet earth.)

You may well be shocked at my so (apparently) deriding such a principle. To judge by merits is, in many circumstances, to be contrasted with partiality or prejudice. I heartily recommend the phrase "all men are created equal", but on what basis? Surely merit will not undergird this recommendation. Merit is a basis for distinguishing, which I also recommend!

Perhaps you do not believe that the two issues (merit vs inherent equality) are commonly confounded. I am satisfied that they are. When I hear the merits of representatives of one group (demarcated by ethnicity, gender, or whatever) touted for the vindication of that group, I normally listen quietly, but I do not need such things. Nor do I have much appreciation for the comparing of statistical averages of various metrics in an effort to prove equality. I realize such things are sometimes done in an effort to silence and defend against bigotry, and I do not deride that intent. But I have also seen the defended group touted implicitly as superior (better at this, better at that), in which case I take it as a betrayal of a pretense—appearing to seek equality but with dominance of some kind as the true objective. I do not see inherent equality as based on merit. My key word here, as you may guess, is "inherent". There may be value in recognizing differences, but merit is the "poor man's" rubric for deciding the matter of inherent equality.

Whatever you think of these things, you'll have to deal in your own mind with what you mean by "equal" and what your basis for it is. But I'll tell you where I start:
So God created man in his own image, in the image of God created he him; male and female created he them. (Genesis 1:27)

Saturday, April 5, 2014

Best Fit Circle: 3D considerations

Somebody recently searched for a 3D best fit circle and I realized there may be some special considerations if your circle is on a slope.  There would also be some tweaks needed if you wanted a best fit sphere instead.


First, I should mention, it's possible that the searcher was really looking for a sphere because I have observed that many people don't remember specific terms and use an analogy from a 2D object that they do know the name of and prefix it with "3D" (there, all better now; Darren: *rolls eyes*). I'm not one to criticize analogies, though I sometimes wonder if people realize they are making one. (If you tell me you're making an analogy, I promise not to roll my eyes.  But tell me up front, so I can catch myself.)

Anyways, if it's a sphere you want a best fit of, the equations for the sum of the square of the error (SSE) and radius (r) become:
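Assuming the same distance-residual approach as in the 2D circle fit, with sphere center \((a, b, c)\) and point distances \(d_k = \sqrt{(x_k-a)^2+(y_k-b)^2+(z_k-c)^2}\), a standard formulation is
\[r = \frac{1}{n}\sum_{k=1}^{n} d_k, \qquad SSE = \sum_{k=1}^{n} \left(d_k - r\right)^2.\]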

However, if you really want a circle on a sloped plane, I don't recommend using the spherical treatment as a "cheat".  I recommend the following method.

Circle on a Sloped Plane

Before you get carried away on this method, you need to decide if this is really what you need. If you want a shape which will make a circle in plan view (top view), then you need to project your points onto that plane (the xy-plane). This is the same as ignoring the elevation (z-axis). In practice, I'd guess this is really what most people would want. It's true that it will actually be an ellipse when viewed in the plane of the sloped surface; although, on only slightly sloping terrain, the difference between that ellipse and a circle probably does not affect the price of rice in China (as we say). If you think the plan view should show a circle, just lop off the z-coordinate and use a plain old 2D method—that is what you actually want.

But if the slope is considerable and you are sure that you want a best fit circle in the plane of that sloped (and basically planar) surface, there are some steps you can take to obtain transformed data to feed into a regular planar method.
  1. Find the best fit plane for your points.
  2. Determine a transform matrix (T) to rotate your plane into the xy-plane as well as its inverse (\(T^{-1}\)).
    • You may use the normal vector of the best fit plane to determine this by using two rotation matrix transforms in composition.
    • You will also need to watch for translation. The plane, prior to rotation, should intersect the origin to work properly. It may be appropriate to use an augmented (4×4) matrix to incorporate translation into the transform as well—this way the inverse will also apply the translation, rather than having a separate step (only one point though, so no biggie, time-wise).
  3. Apply the transform T to your points to obtain points in the xy-plane.
    • The two rotation matrices can be premultiplied so that you only have to carry out the multiplication between a single transform matrix and the point data. (Recall that matrix multiplication is associative.)
  4. Ignore the z-coordinates of the transformed points. Set them to zero or produce (x,y) data by the trivial xy projection [(x,y,z) => (x,y)] only if this is expected by your 2D best fit circle function or method.
  5. Apply 2D methods to obtain a best fit circle on the xy-plane data.  See here for how to set this up in Excel.
  6. Apply the inverse transform (T⁻¹) to your resulting best fit center to put it back into the best fit plane.
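If a spreadsheet isn't your thing, step 5 can also be done in a few lines of code. Here is a minimal Python sketch of a linear least-squares circle fit (the Kåsa method; the names fit_circle_2d and solve3 are my own, and the 3x3 solver is a bare-bones Gaussian elimination):

```python
def solve3(M, b):
    # Solve the 3x3 linear system M x = b by Gaussian elimination
    # with partial pivoting.
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][c] * x[c] for c in range(i + 1, 3))) / A[i][i]
    return x

def fit_circle_2d(pts):
    # Fit x^2 + y^2 + D*x + E*y + F = 0 by least squares (linear in D, E, F),
    # then read off the center and radius. Returns (cx, cy, r).
    n = len(pts)
    Sx = sum(x for x, _ in pts)
    Sy = sum(y for _, y in pts)
    Sxx = sum(x * x for x, _ in pts)
    Syy = sum(y * y for _, y in pts)
    Sxy = sum(x * y for x, y in pts)
    Sxz = sum(x * (x * x + y * y) for x, y in pts)
    Syz = sum(y * (x * x + y * y) for x, y in pts)
    Sz = sum(x * x + y * y for x, y in pts)
    D, E, F = solve3([[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]],
                     [-Sxz, -Syz, -Sz])
    cx, cy = -D / 2, -E / 2
    return cx, cy, (cx * cx + cy * cy - F) ** 0.5
```

With points that lie exactly on a circle the fit reproduces that circle; with noisy points it gives the least-squares estimate.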
Tip: Be watchful of whether your rotational transforms are assuming column vectors or row vectors. Many textbooks give column vector representations, but some programs may assume a row vector.  In the case of Maxima, it will assume a row vector if the vector is given first and a column vector if it is given last:

m1: matrix([c1,-s1,0],[s1,c1,0],[0,0,1]);
r: [a,b,c];
m1.r;            /* column vector representation */
r.transpose(m1); /* row vector representation */

(%o1) matrix([c1,-s1,0],[s1,c1,0],[0,0,1])
(%o2) [a,b,c]
(%o3) matrix([a*c1-b*s1],[a*s1+b*c1],[c])
(%o4) matrix([a*c1-b*s1,a*s1+b*c1,c])
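Steps 2 and 3 can be sketched in code as well. Assuming the column-vector convention (as in m1 . r above), the following Python sketch composes a rotation about z with a rotation about y so that a given plane normal ends up along the positive z-axis (the names rot_z, rot_y, and align_with_z are my own):

```python
import math

def matmul(A, B):
    # 3x3 matrix product A . B
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    # apply A to a column vector v
    return tuple(sum(A[i][k] * v[k] for k in range(3)) for i in range(3))

def rot_z(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def align_with_z(n):
    # Rotation T such that T . n points along +z.
    a, b, c = n
    phi = math.atan2(b, a)                   # Rz(-phi) swings n into the xz-plane
    theta = math.atan2(math.hypot(a, b), c)  # Ry(-theta) then tips it onto +z
    return matmul(rot_y(-theta), rot_z(-phi))
```

Since the composite is a pure rotation, its inverse is just its transpose; if you fold a translation into a 4x4 matrix as suggested above, you would invert that augmented matrix instead.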

Monday, February 24, 2014

Make (Polygon) Faces Anywhere and Produce a Sketchup Model

Sketchup Make is a helpful tool from Trimble (formerly Google) that allows you to fairly quickly create a visual of your building project ideas.  The commands are relatively simple, although you can get a great deal of (intentional) variety in your results with practice.  Sketchup (including Sketchup Make) also allows the user to extend the functionality of the program using Ruby scripts (I say scripts because in Sketchup they are not compiled prior to use but either interpreted or compiled and run on every use—I'm not sure which).

But if you'd rather not learn Ruby right now or if perhaps programming just isn't your thing, there's a simple tool that can help you turn columns of numbers into a Sketchup model. I've written a text file importer (specifically, *.csv file format) that will interpret each row of numbers as the vertices of a face or of a polyline (depending on the beginning of the line).

The deal is, you need to make a CSV file (Comma Separated Values) which has all of the points you need in it. You can make these in Excel.
Fig. 1. Screen shot of a *.csv file open in Excel. Each group of 3 consecutive numbers is interpreted as a 3D point.
Any number of points is allowed per line, though you'll need them to be co-planar if you want a face.
The next two screen shots show an example of how you can set up a spreadsheet to calculate a segment approximation of a sphere. (Kind of like a segment of an orange.)  Click on them to see them at full size.

Fig. 2. Segment of a sphere calculation
Fig. 3. View of the formulae used. Not very complicated, but very organized.
Note that you will need to have the vertices for a face in either clockwise or counter-clockwise order. Copy the vertices on the right, go to a separate sheet of the workbook, and use the paste values option. Save your spreadsheet, then choose Save As, select "CSV", and name your file appropriately. (This will make the CSV file you need, but if you want to go on making changes to your original spreadsheet, you would do best to reopen that spreadsheet. As per the usual behaviour of Save As, your original spreadsheet is now closed and you only appear to still be in it. Kind of irritating.)

Fig. 4. Save As dialog in Excel. Select CSV format.
Here is an animation of a model that I started with a CSV file using the paraboloid tab of the pictured spreadsheet. I then did a rotate-copy a few times to get the completed shape.

8 segment approximation to a paraboloid from Darren Irvine on Vimeo.

What Do I Need in the CSV File?

Each line must start with line or face followed by a comma. Commas are what separate the entries. New lines separate each line and face. The lines are treated as polylines.  For a face you will need at least 9 numbers (or 12, 15, 18, etc.) separated by commas after the first item, like so:

     face,0,0,0,10,10  ,11 ,  55,100 ,  44

Extraneous spaces are okay as long as they don't interrupt a number. It's okay if your lines trail off (say, with trailing commas from empty spreadsheet cells), as in

     face,0,0,0,10,10,11,55,100,44,,,
But you don't want blank entries in the middle like

     face,0,0,0,,10,11,55,100,44
or random non-numeric data like this

     face,0,0,don't put this here,0,face (not here, at the beginning!), 2343, random illegᾳλ χaρaχτeρs: θ, 40,44

Faces will only work as intended if you list the points in either clockwise or counter-clockwise order. You'll end up with a tangled mess otherwise. As for polylines (line), the script will just assume you know what you're doing, although you still should give it point information as for faces.
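If you're generating the rows programmatically rather than in Excel, the format is easy to produce. As a sketch (the helper name ngon_face_row and the file name disc.csv are just illustrative), this Python script writes a single face row for a regular polygon, whose vertices are coplanar and in counter-clockwise order by construction:

```python
import csv
import math

def ngon_face_row(n, radius, z=0.0):
    # One "face" row: a regular n-gon in the plane z = const,
    # vertices in counter-clockwise order.
    row = ["face"]
    for k in range(n):
        a = 2 * math.pi * k / n
        row += [radius * math.cos(a), radius * math.sin(a), z]
    return row

with open("disc.csv", "w", newline="") as f:
    csv.writer(f).writerow(ngon_face_row(12, 10.0))
```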

My script shouldn't cause your system to crash if you feed it bad data, but just the same, play it safe and save your Sketchup model prior to running the script. The import does not start a new file on its own; it imports into the current model—which may be empty.

What Tools Do I Need?

  • Sketchup Make (or above): Sketchup Make is free.
  • dsi_csvfaces.rb: you can obtain the ruby here. Copy it into your Plugins directory. In Sketchup 2015 this is C:\Users\<your_windows_user_name>\AppData\Roaming\SketchUp\SketchUp [n]\SketchUp\Plugins
  • An example:  Have a look at my example spreadsheet which contains a way to approximate a segment of a sphere and a paraboloid.
  • A little ingenuity: make the CSV file any way you know how. Excel is only one option, but I think it's one that many users are likely to be adept at. You may want to use a programming language (perhaps you're a programmer who isn't into Ruby, or you have an intellectual investment in some other means). You may want to use a computer algebra system like Maxima (free), Maple V, or Mathematica. Maybe Javascript. All at your option. You just need a CSV file.

Friday, February 7, 2014

Round 'em up!

Over the past several months I have been dabbling in Common Lisp (LISt Processing language).  Lisp, I think, is the ugly duckling of the programming languages.  The facetious "Lost In Stupid Parentheses" seems more apt as an acronym.

My first encounter with this ungainly language (Lisp) was in AutoLisp (and Visual Lisp). I was on a hiatus from programming regularly and discovered this immediately discouraging language that was the "easiest" way to interact with AutoCAD programmatically. (I don't consider AutoCAD scripts to be programming, though I would still include them under the more general category of automation.) I did not make much headway at that time, and it frankly amazes me that any non-programmer will step up to the challenge and learn it.

But then there was Maxima. I was happy with Maxima as a free computer algebra system. But in the documentation I found the spectre of Lisp. Specifically, Common Lisp. The computations in Maxima are all done in Lisp (though there is probably some non-Lisp glue used to put together a Windows interface for Maxima). Maxima itself supports a fairly robust set of programming constructs as far as doing calculations is concerned:
  • Input/output from files
  • Controlled loops
  • if-then-else
  • functions
  • plotting (via gnuplot—included in standard installs)
  • formatted output
    • nice math notation display
    • export to latex or image
Maxima is built on top of Lisp, and you really have all the abilities of Lisp in Maxima (if you know how to get at them). But, being always in search of a better way of doing things, I am also concerned with whether a technology built on top of another is "hiding" some of the features of the underlying technology. Perhaps hiding isn't exactly the right way to put it; rather, learning Lisp is a paradigm shift from most other (Algol-like) programming languages I was familiar with, while learning Maxima didn't take me through that paradigm shift.

The two most interesting features of Lisp to me are the emphasis on function references and macros. But they both help you to do the same thing: reduce the repetition in your code. Mind you, they serve this end in very different ways.

When I wanted to implement Jarvis's march [1, p. 955] to encircle a list of points in a "convex hull", it was particularly the function referencing that made the implementation of the code fairly simple.

Fig 1. The red lines join together the red + signs which indicate the convex hull for these points.
The general idea of Jarvis's march is that you find an outside point to start from and then keep picking the next point so that you never have to switch directions as you walk around. I started at the leftmost point. In the event of a tie, I chose the lower of these. (That would place us at about (0.1, 15.0).) I decided to walk around the points in a counter-clockwise direction. To decide which point comes next, I need to find the point which requires the right-most turn. So I begin by choosing the first of the remaining points (in whatever order I received them). I go through the remaining points and test whether I would need to make a right turn or a left turn to point toward them (from my vantage point on the hull point chosen at the previous step). If it is a right turn, then I have a new winner, which replaces the previous one, and I continue on to the points not yet checked. The algorithm is really a repeated application of a maximum function, where "maximum" is defined according to a different function than normal. What I need are predicate functions which define the rules for choosing a winner (a "maximum") and a routine, remove-winner, that separates the winner from the remaining points and accepts a predicate function as a parameter.

The idea of remove-winner is to use the test function (supplied by the caller) to determine whether the next item is "better" than the current winner. If you passed in the function < (using the syntax #'<) you would get the smallest value in the list (provided it was a list of numbers). Notice the thought process in this function is not just to find the max and return it, but rather to produce a list which includes the "winner" and the rest of the values (that didn't win) separately. We don't just need to know the winner, we need to separate the winner from the rest.

When we get to the convex-hull routine, we can use the same routine (remove-winner) using two different tests: one to find the initial element and another to find the successive elements.

On the first occasion we use remove-winner, the predicate is of the same sort we would use to perform a two-key sort. In the main body of the routine, we have a more interesting function which accepts two points, calculates their direction vectors (using the current head point of the hull as the origin), and decides whether the second is a right turn away from the first, in which case it declares the second to be the (current) winner.
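The downloadable package is in Lisp, but the shape of the solution translates readily. Here is a minimal Python sketch of the same idea (the names and tie-breaking details are my own): remove_winner pulls out the item preferred by a caller-supplied test, and the march uses it twice—once for the starting point and once per step with a right-turn predicate:

```python
def remove_winner(test, items):
    # Return (winner, rest): test(challenger, champion) is True when the
    # challenger should replace the current champion.
    winner, rest = items[0], []
    for item in items[1:]:
        if test(item, winner):
            rest.append(winner)
            winner = item
        else:
            rest.append(item)
    return winner, rest

def cross(o, a, b):
    # z-component of (a - o) x (b - o); negative means b is a right turn
    # from a as seen from o
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Jarvis's march, walking counter-clockwise from the leftmost
    # (lowest on ties) point.
    pts = list(points)
    start, _ = remove_winner(lambda p, q: p < q, pts)
    hull, rest = [start], [p for p in pts if p != start]
    while True:
        head = hull[-1]
        candidates = rest + [start] if len(hull) > 1 else rest
        nxt, _ = remove_winner(lambda p, q: cross(head, q, p) < 0, candidates)
        if nxt == start:
            return hull
        hull.append(nxt)
        rest = [p for p in rest if p != nxt]
```

Passing the start point back in as a candidate (after the first step) is what lets the walk close the loop and terminate.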

Can you do this [abstract the predicate] in other languages? Yes. In many other languages you could implement the routine substantially like this. But the key is this: would you ever think to do it this way if you were using VB.NET or C# or C++? Maybe. But if so, not because of your experience with those languages; rather, because of your experience with Lisp or F# or other languages which feature a functional style of programming.

Fig. 1 was produced by running a small bit of Maxima code which loads some routines from a separate package written in Lisp (including the routines described above).  You can download both files to try them out; you will need to have Maxima installed.

  1. Cormen, T.H., Leiserson, C.E., Rivest, R.L., & Stein, C., Introduction to Algorithms, Second Ed., McGraw-Hill Book Company / The MIT Press, 2001.

Tuesday, January 28, 2014

Find the Intersection of a Plane with a Line

We can represent a plane with four numbers, the coefficients of the general equation of a plane, as \(ax + by + cz + d = 0\).  For convenience, we can represent this also as
\[n \cdot r + d = 0,\]
where \(n = (a, b, c)\) and \(r = (x, y, z)\) is a point on the plane.

We take a line through two points, A and B, in vector form as
\[L(t) = A + (B - A)\,t.\]
We want to find the intersection of the line with the plane.  That is, we need \(f\) such that
\[n \cdot L(f) + d = 0.\]
And so
\[f = \frac{-(n \cdot A + d)}{n \cdot (B - A)} = \frac{n \cdot A + d}{(n \cdot A + d) - (n \cdot B + d)}.\]
The desired point is \(L(f)\), since it is on the plane and on the line.

Notice that the calculation of \(f\) requires only the evaluation of the general equation of the plane at the points A and B. These evaluations will not be zero unless A or B is on the plane, in which case we would already have the result. If the line is parallel with the plane, then the denominator will be zero, giving an indeterminate result. This fits with the geometric interpretation of the plane evaluation. The point of the last step in our derivation is that it simplifies the coding of the problem somewhat by reducing it to two plane evaluations. (If you observe carefully, you will see it makes the difference not of two additions but only of one.)

Evaluating the equation of a plane at a point not on the plane gives a result which is proportional to the distance from the point to the plane. If the normal vector \(n = (a, b, c)\) has a magnitude of 1, then it yields the (signed) distance. However, in the formula we have derived, we have not depended on the equation of the plane being normalized: the constant of proportionality in the numerator cancels with the one in the denominator (behind the scenes, or explicitly if you derive it by a different approach) to produce the simple result for \(f\) above.
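Reduced to two plane evaluations, the code is short. A minimal Python sketch (the function names are mine; the plane is given by its four coefficients):

```python
def plane_eval(plane, p):
    # evaluate a*x + b*y + c*z + d at the point p
    a, b, c, d = plane
    return a * p[0] + b * p[1] + c * p[2] + d

def line_plane_intersection(plane, A, B):
    # Intersection of the line through A and B with the plane, or None
    # if the line is parallel to the plane (possibly lying in it).
    pa, pb = plane_eval(plane, A), plane_eval(plane, B)
    if pa == pb:
        return None
    f = pa / (pa - pb)
    return tuple(a + (b - a) * f for a, b in zip(A, B))
```

With floating-point data you would compare the two evaluations against a tolerance rather than testing exact equality.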

The idea for this solution came from initially viewing it as a linear interpolation problem, which, of course, it is. Similar triangles are also an acceptable approach to the problem, though it may be more difficult to deal with the signs. The above derivation handles the signs transparently.

Wednesday, January 15, 2014

Sum of Powers

In the sum of powers problem, we are looking for the sum of the pth powers of the first n integers. In this post, we consider the problem of determining a polynomial expression in \(n\) for a given power. The best known case is for \(p = 1\), namely,
\[\sum_{k=1}^{n} k = \frac{n\,(n+1)}{2}.\]
More generally, we should like a formula for a polynomial \(f_p\) such that
\[f_p(n) = \sum_{k=1}^{n} k^p.\]
We will begin by defining a function and proceed to prove that it satisfies the above equality. We define
\[f_0(n) = n, \qquad f_p(n) = \frac{1}{p+1}\left[(n+1)^{p+1} - 1 - \sum_{j=0}^{p-1}\binom{p+1}{j} f_j(n)\right] \quad (p \ge 1).\]

Proposition 1. For all real numbers n and natural numbers p,
\[f_p(n+1) - f_p(n) = (n+1)^p.\]
Proof:  First we observe that \(f_0(n+1) - f_0(n) = (n + 1) - n = 1 = (n + 1)^0\), so that we have our induction base set. Suppose that for any real number n > 0 and all natural numbers \(j\) less than a given \(p\) we have
\[f_j(n+1) - f_j(n) = (n+1)^j.\]
Now, given any real number n > 0,
\[\begin{aligned} f_p(n+1) - f_p(n) &= \frac{1}{p+1}\left[(n+2)^{p+1} - (n+1)^{p+1} - \sum_{j=0}^{p-1}\binom{p+1}{j}\bigl(f_j(n+1) - f_j(n)\bigr)\right] \\ &= \frac{1}{p+1}\left[\sum_{j=0}^{p+1}\binom{p+1}{j}(n+1)^j - (n+1)^{p+1} - \sum_{j=0}^{p-1}\binom{p+1}{j}(n+1)^j\right] \\ &= \frac{1}{p+1}\binom{p+1}{p}(n+1)^p = (n+1)^p, \end{aligned}\]
which completes the argument.■

What remains is an induction argument to show the sum (restricting \(n\) to natural numbers), but this is left as an exercise.  The formula we have given for \(f_p\) allows us to calculate the first \(p\) sum-of-powers polynomials in time \(O(p^2)\), which is a big improvement when compared to solving a system of linear equations to find the largest of them, which would require time \(O(p^3)\) (finding all of them using systems of linear equations requires \(O(p^4)\)).

The following Maxima program and its output give a demonstration of how to implement the above formula:


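The same recursion is easy to implement in any language; here is a sketch in Python using exact rational arithmetic (the function name sum_of_powers_polys is mine; each polynomial is represented by its coefficient list, lowest power first):

```python
from fractions import Fraction
from math import comb

def sum_of_powers_polys(pmax):
    # polys[p][k] is the coefficient of n**k in f_p(n) = 1**p + ... + n**p
    polys = [[Fraction(0), Fraction(1)]]          # f_0(n) = n
    for p in range(1, pmax + 1):
        # start from (n + 1)**(p + 1) - 1, expanded by the binomial theorem
        coeffs = [Fraction(comb(p + 1, k)) for k in range(p + 2)]
        coeffs[0] = Fraction(0)
        # subtract binomial(p+1, j) * f_j(n) for each j < p
        for j in range(p):
            for k, c in enumerate(polys[j]):
                coeffs[k] -= comb(p + 1, j) * c
        polys.append([c / (p + 1) for c in coeffs])
    return polys
```

For example, the \(p = 1\) entry comes out as coefficients \([0, 1/2, 1/2]\), i.e. \(n/2 + n^2/2 = n(n+1)/2\), as expected.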
Wednesday, January 1, 2014

The Un-Pyramid

In previous posts I've discussed truncated pyramids and irregular triangular prisms, but I hadn't at that time thought to consider a "really irregular triangular prism".  While searching the internet to see what people were thinking about with respect to these topics, I found that someone had something different in mind:  a 5-sided solid with a non-constant triangular cross-section and no faces (guaranteed) parallel with each other.

The first thing to note about this shape is the characterization of the faces.  The two "ends" are triangular faces. There are three quadrilateral faces which form the boundaries of the non-constant triangular cross-section. The three edges shared by the quadrilateral faces are not parallel with each other and the quadrilaterals could be completely irregular. Here is a quick video of a Sketchup model of just such a shape:

The Un-Pyramid from Darren Irvine on Vimeo.

Since the consecutive pairs of "side" edges are part of a quadrilateral (they are proper "faces"—no twists), they are certainly coplanar. In a previous post, I demonstrated that for two parallel faces whose corresponding vertices are joined by edges, with consecutive side edges coplanar, the parallel faces are necessarily similar. I also demonstrated (Proposition 4) that the projected side edges intersect at a common point, which we may call the apex.

The un-pyramid does not have the two parallel faces that formed the starting point in that post. However, it does have the consecutive coplanar edges. And although we have not been given a "top" face parallel with the bottom, we can certainly make one. We establish a cutting plane through the closest vertex (or vertices—there could be two at the same level) of the "top" face which is parallel with the larger "bottom" face. (We can turn the un-pyramid around to make the names fit if it comes to us upside down or sideways.) Now we can be sure that the side edges can be projected to a common apex as per 3D Analogue to the Trapezoid (part 3).
Fig. 1.  A really irregular triangular pyramid? Hmm...
Calculating the volume of this shape requires a bit of calculation just to figure out how to break it up into pieces. If you want to program a solution to this problem, you are probably going to take in A1, B1, C1, A2, B2, and C2 as the parameters. To verify you have valid data, establish that:
  • A1A2 is coplanar with B1B2
  • B1B2 is coplanar with C1C2
  • C1C2 is coplanar with A1A2
  • Also, make sure these same edges don't intersect each other (meaning the lines projected through them should intersect outside of the boundaries of the line segments).
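The three coplanarity checks come down to a scalar triple product: the four endpoints of a pair of edges are coplanar exactly when the parallelepiped spanned by their difference vectors has zero volume. A minimal Python sketch (the function names are mine; a tolerance stands in for exact zero with floating-point input):

```python
def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def triple(u, v, w):
    # scalar triple product u . (v x w): signed volume of the parallelepiped
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          + u[1] * (v[2] * w[0] - v[0] * w[2])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))

def edges_coplanar(p1, p2, q1, q2, tol=1e-9):
    # True when segments p1p2 and q1q2 lie in a common plane.
    return abs(triple(sub(p2, p1), sub(q1, p1), sub(q2, p1))) <= tol
```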
You also need to determine whether the apex is closer to A1B1C1 or A2B2C2. Then reorder or relabel the end faces as necessary. The face further away is the base (say 1). Now determine which of the top vertices is closest to the base and establish a cutting plane through it, parallel to the base.

(Hint: It will be convenient to reorder or relabel your points so that A2 is the closest point, B2 next, and C2 furthest; this avoids polluting your code with cases. Don't forget to reorder the points in the base. You may have a job making the relabeling of previous calculation results work properly. Making clear code that does minimal recalculations is probably the most challenging aspect of this problem if treated as a programming challenge.)

Now we can calculate the volume as a sum of two volumes, where B and C are the points at which the cutting plane crosses edges B1B2 and C1C2:
  1. Frustum of a (slanted) pyramid with base A1B1C1 and top A2BC.
  2. Slanted pyramid with base BB2C2C and apex A2
    • Find the area of the base
      • watch out for B = B2 and C = C2 which changes the shape of the base
        • If both are true, then you've dodged the bullet—there's no volume to worry about.
        • If B = B2, then either it is at the same level as C2 or the same level as A2. But in either case, you have a triangular base.
        • Very possibly neither of these will be true. Break the quadrilateral up into two triangles using the diagonal of your choice.
    • Find the height of the pyramid using the component method as for finding the height of the first volume.
But, what should we call it?
  1. triangular un-pyramid
  2. really irregular triangular prism
  3. skew-cut slanted triangular pyramid
    • alternatively, skew-cut slanted pyramid with triangular base
I guess I like 1) for the advertising, 2) for the baffling rhetorical effect, and 3) for half a hope of conveying what the shape really is.