Friday, February 12, 2021

EMACS with Slime on Chromebook

Because of a felt-insufficiency of nerdness in my life, I decided it was high time I got set up to use Slime in EMACS on a Chromebook. I have been an EMACS user on Windows, where I had my first experience of Common Lisp development. I now use LispWorks on my Windows machine, but if there were a way to do Common Lisp on my Chromebook, I thought it worth hazarding some of my time to learn how to go about it.

First, get Linux on your Chromebook. For newer systems, this is as simple as turning on Linux Beta in the user settings of your Chromebook. I don't know how well this works on older Chromebooks, but I understand there are ways.

With Linux enabled or installed as required, go to the terminal (settings key, then type linux or terminal and you should see it) and type sudo apt-get install emacs25. I had to run an update to apt-get for this to work properly (sudo apt-get update). Then I reran the emacs25 install (same command line) and it caught up the parts it had missed the first time.
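In sum, the two commands were:

    sudo apt-get update
    sudo apt-get install emacs25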

Similarly, you can install a version of Common Lisp. I chose Steel Bank Common Lisp (sbcl). The command line for that is (perhaps predictably) sudo apt-get install sbcl.

Quicklisp is next. I downloaded it from the Quicklisp site. I moved it into my Linux files because it seemed like a good idea offhand, though I'm not sure that it is strictly necessary. Before trying to load it, I checked out the (rather simple) file structure on my Linux system from the terminal perspective. Use the ls command to see the files in the current directory and the cd command to change directory (I'm not really a Linux user; maybe you're not either). Next, I started sbcl by typing it at the command line. This brings you to a command line version of sbcl where you can type Common Lisp commands. We use this to load Quicklisp. I issued the following, pressing enter after each line:

    (load "quicklisp.lisp")
    (quicklisp-quickstart:install)
    (ql:add-to-init-file)
    (ql:quickload "quicklisp-slime-helper")
    (quit)

Now, you need some code added to the init file, but I didn't see such a file and wasn't sure where to put it. Fortunately, there's a way for EMACS to help you with that. Go into EMACS and type CTRL+x CTRL+f; it will assume you want the home directory and lead you with "~/", and so you add to that .emacs.d/init.el and press enter. Or, in EMACSese, that's C-x C-f ~/.emacs.d/init.el RET. HT to the Wiki. This will create and open the file for you.

The code you will want to put in this file is suggested at the end of installing the quicklisp-slime-helper and it is this:
    (load (expand-file-name "~/quicklisp/slime-helper.el"))
    (setq inferior-lisp-program "sbcl")
Paste the code into the file you just opened. Now, be prepared for a "wha?" moment. The shortcut sequence for pasting into EMACS is CTRL+y. Or, C-y as the EMACS users like to say. This will not be your last such moment, but carry on.

You are now prepared to start writing and executing Common Lisp code in EMACS. In the below, C- means press and hold CTRL and M- means press and hold ALT. I do the following sequence as a standard way to get started:
    C-x 3 (splits the frame into left and right panes: one for the code file, one for the REPL)
    M-x slime RET (starts the REPL)
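Two more standard bindings (stock Emacs/Slime, nothing specific to this setup) cover most of the basic loop from there:

    C-c C-k (compiles and loads the file in the current buffer into the REPL)
    C-x o (moves the cursor to the other pane)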

Here is the classic video tutorial of most of these steps done for Windows that also gives a bit of a flavor of what it is like to use Slime in EMACS for Common Lisp development: Installing Common Lisp, Emacs, Slime & Quicklisp.

Thursday, December 24, 2020

CLSQL in LispWorks: Varchar Fields

I was attempting to use CLSQL in LispWorks and had trouble reading in strings. I could even output the strings using insert, but couldn't read them in properly. There were two main issues that I needed to make changes for (LispWorks 7.1.2 64-bit running on Windows 10).

First, LispWorks didn't like a return type of :unsigned-long-long in "...\quicklisp\software\clsql-20160208-git\uffi\clsql-uffi.lisp". Replace these references to :unsigned-long-long with '(:unsigned :long :long) and then it will compile properly. I can't speak to whether this is 100% right, and I don't think that any of my current code depends on getting it right. But it does let me compile.
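To illustrate the shape of that edit (the declaration below is hypothetical; the real file contains several such forms):

    ;; Hypothetical UFFI declaration, for illustration only.
    ;; Before:
    ;;   (uffi:def-function ("some_64_bit_call" some-64-bit-call)
    ;;       ((ptr :pointer-void))
    ;;     :returning :unsigned-long-long)
    ;; After:
    (uffi:def-function ("some_64_bit_call" some-64-bit-call)
        ((ptr :pointer-void))
      :returning '(:unsigned :long :long))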

The next problem was in "...\quicklisp\software\uffi-20180228-git\src\strings.lisp", in the function fast-native-to-string. There are two such definitions, split across different versions of Lisp. LispWorks 7.1.2 falls under the first definition, and there is a reference to base-char that is specifically used for LispWorks. I changed base-char to be the same as for the non-LispWorks versions, namely, (signed-byte 8).
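Again, only a sketch of the shape of that change (consult the actual definitions in strings.lisp):

    ;; The LispWorks branch declared the element type along these lines:
    ;;   (simple-array base-char (*))
    ;; I made it match the other implementations:
    ;;   (simple-array (signed-byte 8) (*))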



I made a simple table with data that contains both varchar(50) and nvarchar(50) to test with, defined below. I put (mostly) the same data in each of Name and Name1 to see if there was any difference.
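The table definition was shown as a screenshot; here is a plausible reconstruction based on the column names used in the query below (the Id column's exact type is my assumption):

    ;; Plausible reconstruction -- the Id column's type is an assumption.
    (clsql:execute-command
     "create table TableWithStrings
        (Id int primary key,
         Name varchar(50),
         Name1 nvarchar(50))")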


Using clsql to get all of the tuples that were entered is as simple as (clsql:query "select Id, Name, Name1 from TableWithStrings").

I recognize that I have not addressed Unicode support here, which is what the base-char type was probably intended to support (just set it to whichever one you want to use in your application?). I'm not able to find whom to contact regarding this, as I see that the website for the project may have been abandoned (the last change on the change list was in Jan 2016, specifically for LispWorks support). I note, however, that someone more recently made a change to uffi, as the folder name includes a date suggesting Feb 2018.

I would obviously be reluctant to make the change I made above in the master project for uffi, as I don't know the full range of usage. However, for my particular combination of situation and needs, it worked.

Sunday, September 6, 2020

The Un-Pyramid

In previous posts I've discussed truncated pyramids and irregular triangular prisms, but I hadn't at that time thought to consider a "really irregular triangular prism."  I was searching the internet on the general notions of these topics to see what people were thinking about them and found that someone had something different in mind: a 5-sided solid with a non-constant triangular cross-section and no two faces guaranteed to be parallel with each other.

The first thing to note about this shape is the characterization of the faces.  The two "ends" are triangular faces. There are three quadrilateral faces which form the boundaries of the non-constant triangular cross-section. The three edges shared by the quadrilateral faces are not parallel with each other and the quadrilaterals could be completely irregular. Here is a quick video of a Sketchup model of just such a shape:


The Un-Pyramid from Darren Irvine on Vimeo.

Since the consecutive pairs of "side" edges are part of a quadrilateral (they are proper "faces"—no twists), they are certainly coplanar. In a previous post, I demonstrated that when two parallel faces have corresponding vertices joined by edges such that consecutive side edges are coplanar, the parallel faces are necessarily similar. I also demonstrated (Proposition 4) that the projected side edges intersect at a common point, which we may call the apex.

The un-pyramid does not have the two parallel faces that formed the starting point in that post. However, it does have the consecutive coplanar edges. And although we have not been given a "top" face parallel with the bottom, we can certainly make one. We establish a cutting plane through the closest vertex (or vertices—there could be two at the same level) of the "top" face which is parallel with the larger "bottom" face. (We can turn the un-pyramid around to make the names fit if it comes to us upside down or sideways.) Now we can be sure that the side edges can be projected to a common apex as per 3D Analogue to the Trapezoid (part 3).
Fig. 1.  A really irregular triangular pyramid? Hmm...
Calculating the volume of this shape requires a bit of work just to figure out how to break it up into pieces. If you want to program a solution to this problem, you are probably going to take in A1, B1, C1, A2, B2, and C2 as the parameters. To verify you have valid data, establish that (a coplanarity check is sketched after this list):
  • A1A2 is coplanar with B1B2
  • B1B2 is coplanar with C1C2
  • C1C2 is coplanar with A1A2
  • Also, make sure these same edges don't intersect each other (meaning the lines projected through them should intersect outside of the boundaries of the line segments).
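If it helps to see it in code, here is a minimal coplanarity test (an F# sketch; the Vec3 helpers and the tolerance are my own, not from the post). Two segments are coplanar exactly when the scalar triple product of their two direction vectors and any connecting vector is zero:

    type Vec3 = { X: float; Y: float; Z: float }
    let sub a b = { X = a.X - b.X; Y = a.Y - b.Y; Z = a.Z - b.Z }
    let cross a b =
        { X = a.Y * b.Z - a.Z * b.Y
          Y = a.Z * b.X - a.X * b.Z
          Z = a.X * b.Y - a.Y * b.X }
    let dot a b = a.X * b.X + a.Y * b.Y + a.Z * b.Z
    // Segments p1p2 and q1q2: directions (p2 - p1), (q2 - q1); connector (q1 - p1).
    let coplanar p1 p2 q1 q2 =
        abs (dot (cross (sub p2 p1) (sub q2 q1)) (sub q1 p1)) < 1e-9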
You also need to determine whether the apex is closer to A1B1C1 or A2B2C2. Then reorder or relabel the end faces as necessary. The face further away is the base (say 1). Now determine which of the top vertices is closest to the base and establish a cutting plane through it, parallel to the base.

(Hint: It will be convenient to reorder or relabel your points so that A2 is the closest point, B2 next, and C2 furthest; this avoids polluting your code with cases. Don't forget to reorder the points in the base. You may have a job making the relabeling of previous calculation results work properly. Making clear code that does minimal recalculations is probably the most challenging aspect of this problem if treated as a programming challenge.)

Now we can calculate the volume as a sum of two volumes:
  1. Frustum of a (slanted) pyramid with base A1B1C1 and top A2BC, where B and C are the points where the cutting plane crosses B1B2 and C1C2 (see the formula note after this list).
  2. Slanted pyramid with base BB2C2C and apex A2.
    • Find the area of the base
      • watch out for B = B2 and C = C2 which changes the shape of the base
        • If both are true, then you've dodged the bullet—there's no volume to worry about.
        • If B = B2, then either it is at the same level as C2 or the same level as A2. But in either case, you have a triangular base.
        • Very possibly neither of these will be true. Break the quadrilateral up into two triangles using the diagonal of your choice.
    • Find the height of the pyramid using the component method as for finding the height of the first volume.
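A note on the frustum piece: because parallel cross-sections of a pyramid (slanted or not) are similar, the usual frustum formula still applies: \(V = \frac{h}{3}\left(A_1 + A_2 + \sqrt{A_1 A_2}\right)\), where \(h\) is the perpendicular distance between the two parallel triangles and \(A_1\), \(A_2\) are their areas.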
But, what should we call it?
  1. triangular un-pyramid
  2. really irregular triangular prism
  3. skew-cut slanted triangular pyramid
    • alternatively, skew-cut slanted pyramid with triangular base
I guess I like 1) for the advertising, 2) for the baffling rhetorical effect, and 3) for half a hope of conveying what the shape really is.

Trimming the Split Ends of Waveforms

In a recent post about making sounds in F# (using Pulse Code Modulation, PCM) I noted that my sound production method was creating popping sounds at the ends of each section of sound. I suggested this was due to sudden changes in phase, which I am persuaded is at least involved. Whether or not it is the fundamental cause, it may be a relevant observation about instrumental sound production.

One way to make consecutive wave forms not have pops between them might well be to carry the phase from one note to the next, as that would lessen the sudden change in the sound. Another way, probably simpler, and the method I pursue in this post, is to "back off" from the tail—the "split end" of the wave form—and only output full cycles of waves, with silence filling in the left-over part of a requested duration. My experimentation suggests that this approach still results in a percussive sound at the end of notes when played on the speakers. (I suppose that electronic keyboards apply dampening¹ when the keyboardist releases a key to avoid this percussive sound.)

The wave forms I was previously producing can be illustrated by Fig. 1. We can reduce the popping by avoiding partial oscillations (Fig. 2). However, even on the final wave form of a sequence of wave forms, there is an apparent percussive sound, suggesting that this effect is not (fully) attributable to the sudden start of the next sound. Eliminating it would probably involve dampening: either the dampening would need to be a tail added to the sound beyond the requested duration, or a dampening period would need to be built into the duration requested.

Fig. 1. A partial oscillation is dropped and a new wave form starts at a phase of 0.0. The "jog" is audible as a "pop".
Fig. 2. "Silent partials" means we don't output any pulse signal for a partial oscillation. The feature is perceivable by the human ear as a slight percussive sound.

It's worth thinking a bit about how typical "physical" sound production has mechanisms in it which "naturally" prevent the popping that we have to carefully avoid in "artificial" sound production.
  • In a wind instrument, you can't have a sudden enough change in air pressure or sudden enough change in the vibration of the reed or instrument body to create pops.
  • In a stringed instrument, alteration of the frequency of the vibration of the string will maintain the phase of the vibration from one frequency to the next. 
    • The wave form is not "interrupted" by anything you can do to the string. There are no truly "sudden" changes you can make to the vibration of the string. 
    • Any dampening you can do is gradual in hearing terms. 
    • A "hammer-on" a guitar string does not suddenly move the position of the string with respect to its vibration period—phase is maintained.
  • In a piano, dampening does not create sufficiently sudden changes in the wave form to create a pop.
In short, (non-digital) instrumental sound production avoids "pops" by physical constraints that produce the effects of dampening and/or phase continuity. Digital sound production is not inherently constrained by mechanisms that enforce these effects.

I have altered the sinewave function to fill the last section of the sound with silence, following the pattern suggested by Fig. 2. This does leave an apparent percussive effect, but is a slight improvement in sound.
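(The altered function was shown as an image; below is a sketch of the idea in F#—names and the exact framing are my assumptions.)

    // Emit only whole oscillations; pad the rest of the requested duration
    // with silence, as in Fig. 2.
    let sampleRate = 44100

    let sineWaveTrimmed (freq: float) (durationSec: float) (volume: int) : seq<int> =
        let total = int (float sampleRate * durationSec)
        let samplesPerCycle = float sampleRate / freq
        // Round the audible part down to a whole number of cycles.
        let audible = int (floor (float total / samplesPerCycle) * samplesPerCycle)
        seq { for i in 0 .. total - 1 ->
                if i < audible then
                    int (float volume *
                         sin (2.0 * System.Math.PI * freq * float i / float sampleRate))
                else 0 }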



Something this experiment does not deal with is whether we are hearing an effect of the wave form, per se, or whether we are hearing the behavior of the speakers that are outputting the sound. A sudden stop of something physical within the speaker might generate an actual percussive sound. Also still outstanding is whether the human ear can perceive phase discontinuity in the absence of an amplitude discontinuity.

¹ Dampening is a (progressive) reduction in amplitude over several consecutive oscillations of a wave.

Sounds Like F#

Last time, I thought that I was going to have to do some special research to account for multiple sources of sound. Reading this short physics snippet, I can see that handling multiple sources really is as simple as my initial thinking inclined me to believe. You can ignore all of the decibel stuff and focus on the statements about amplitude because, of course, I am already working in terms of amplitude. What can I say, I let the mass of voices on the internet trouble my certainty unduly? Sort of.

There's two important issues to consider:
  1. Offsetting of the signals.
  2. Psychological issues regarding the perception of sound.
Regarding number 2, here's a quick hit I found: http://hyperphysics.phy-astr.gsu.edu/hbase/Sound/loud.html. It seems credible, but I'm not sufficiently interested to pursue it further. And so, my dear Perception, I salute you in passing, Good day, Sir!

Offsetting of the signals is something that is implicitly accounted for by naively adding the amplitudes, but there is a question of what the sineChord() function is trying to achieve. Do I want:
  1. The result of the signals beginning at exactly the same time (the piano player hits all his notes exactly[?] at the same time—lucky dog, guitar player not as much[?]. Hmm...)?
  2. The average case of sounds which may or may not be in phase?
Regarding case 2, Loring Chien suggests in a comment on Quora that the average case would be a 41% increase in volume for identical sounds not necessarily in phase. At first glance, I'm inclined to think it is a plausible statement. Seeming to confirm this suggestion is the summary about Adding amplitudes (and levels) found here, which also explains where the 41% increase comes from, namely, from \(\sqrt{2}\approx 1.414\dots\). Good enough for me.

All such being said, I will pass it by, as I am approaching this from a mechanical perspective. I'm neither interested in the psychological side of things, nor am I interested in getting average case volumes. I prefer the waves to interact freely. It would be neat, for example, to be able to manipulate the offset on purpose and get different volumes. The naive approach is a little more open to experimentation.
I would like to go at least one step further though and produce a sequence of notes and chords and play them without concern of fuzz or pops caused by cutting the tops of the amplitudes off. We need two pieces: a) a limiter and b) a means of producing sequences of sounds in a musically helpful way.
A limiter is pretty straightforward and is the stuff of basic queries. I have the advantage of being able to determine the greatest amplitude prior to playing any of the sound. The function limiter() finds the largest absolute amplitude and scales the sound to fit.
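(The listing was an image; here is a minimal sketch under my own naming assumptions—wave forms are sequences of integer samples, as in the previous post.)

    // Find the largest absolute sample; if it exceeds the target peak,
    // scale the whole wave form down to fit.
    let limiter (peak: int) (wave: seq<int>) : seq<int> =
        let samples = Seq.toArray wave          // two passes needed, so materialize
        let maxAbs = samples |> Array.fold (fun m s -> max m (abs s)) 0
        if maxAbs <= peak then Seq.ofArray samples
        else samples |> Seq.map (fun s -> int (float s * float peak / float maxAbs))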


Here is an example usage:
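(Also a reconstruction; sineChord and PlayWave are from the previous post, with signatures as sketched there.)

    // A deliberately loud three-note chord, limited to a safe 16-bit peak.
    sineChord [ 220.0; 277.18; 329.63 ] 2.0 60000
    |> limiter 32000
    |> PlayWave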


One concern here is that we may make some sounds that we care about too quiet by this approach. To address such concerns we would need a compressor that affects sounds from the bottom and raises them as well. The general idea of a sound compressor is to take sounds within a certain amplitude range and compress them into a smaller range. So, sounds outside your range (above or below) get eliminated and sounds within your range get squeezed into a tighter dynamic (volume) range. This might be worth exploring later, but I do not intend to go there in this post.

Last post I had a function called sineChord(), which I realize could be generalized. Any instrument (or group of identical instruments) could be combined using a single chord function that would take as a parameter a function that converts a frequency into a specific instrument sound. This would apply to any instruments where we are considering the notes of the chord as starting at exactly the same time (guitar would need to be handled differently). So, instead of sineChord(), let's define instrumentChord():
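(My reconstruction of the generalization: the instrument parameter turns one frequency into a wave form, and the chord is the sample-wise sum of the per-note wave forms.)

    let instrumentChord (instrument: float -> seq<int>) (freqs: float list) : seq<int> =
        let waves = freqs |> List.map (instrument >> Seq.toArray)
        let len = waves |> List.map Array.length |> List.max
        // Sum the notes sample-wise; shorter wave forms contribute silence.
        seq { for i in 0 .. len - 1 ->
                waves |> List.sumBy (fun w -> if i < w.Length then w.[i] else 0) }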



Next we will create a simple class to hold a sequence of notes/chords and durations, from which we can then play our sounds.

Reference Chords and Values

Here are a few reference chord shapes and duration values. I define the chord shape and the caller supplies the root sound to get an actual chord. The durations are relative to the whole note and the chord shapes are semi-tone counts for several guitar chord shapes. The root of the shapes is set to 0 so that by adding the midi-note value to all items you end up with a chord using that midi note as the root. This is the job of chord().
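(The listing was an image; this is a reconstruction. The shape values below are the standard semitone stacks for six-string E-form chords; other shapes would be built the same way.)

    // Durations as fractions of a whole note.
    let Whole   = 1.0
    let Half    = 0.5
    let Quarter = 0.25
    let Eighth  = 0.125
    // Chord shapes as semitone offsets above the root (root = 0).
    let EShape  = [ 0; 7; 12; 16; 19; 24 ]
    let EmShape = [ 0; 7; 12; 15; 19; 24 ]
    // chord: transpose a shape so the given midi note becomes its root.
    let chord midiRoot shape = shape |> List.map ((+) midiRoot)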



Next, InstrumentSequence is an accumulator object that knows how to output a wave form and play it. For simplicity, it is limited to the classic equal temperament western scale. Note that it takes a function that has the task of turning a frequency into a wave form, which means that by creating a new function to generate more interesting wave forms, we can have them played by the same accumulator. If we only want it to produce the wave form and aggregate the wave forms into some other composition, we can access the wave form through the routine WaveForm.
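(A minimal reconstruction—the original class was shown as an image. Member names follow the prose; the instrument function maps a frequency and a duration to a wave form, durations are treated as seconds for simplicity, and limiter and instrumentChord are the sketches above, with PlayWave from the earlier post.)

    type InstrumentSequence(instrument: float -> float -> seq<int>) =
        let items = System.Collections.Generic.List<int list * float>()
        // Equal temperament: midi note 69 is A440.
        let freq n = 440.0 * 2.0 ** (float (n - 69) / 12.0)
        member _.Add(notes: int list, duration: float) = items.Add((notes, duration))
        member _.WaveForm : seq<int> =
            seq { for (notes, dur) in items do
                    yield! instrumentChord (fun f -> instrument f dur)
                                           (notes |> List.map freq) }
        member this.Play() = this.WaveForm |> limiter 32000 |> PlayWave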



A sample use of the class which follows a very basic E, A, B chord progression follows:
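(Hypothetical usage, with the sketched sineWave from the earlier post supplying the instrument.)

    // E, A, B progression at midi roots 40, 45, 47.
    let progression = InstrumentSequence(fun f d -> sineWave f d 8000)
    progression.Add(chord 40 EShape, Half)
    progression.Add(chord 45 EShape, Half)
    progression.Add(chord 47 EShape, Whole)
    progression.Play()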



To get the same progression with minors, use the EmShape, etc., values. But, at the end of the day, we've still got sine waves, lots of them. And, it's mostly true that,

                           boring + boring = boring.

I will not attempt the proof. You can make some more examples to see if you can generate a counter-example.

There is also some unpleasant popping that I would like to understand a bit better—for now I'm going to guess it relates to sudden changes in the phases of the waves at the terminations of each section of sound. Not sure what the solution is but I'll guess that some kind of dampening of the amplitude at the end of a wave would help reduce this.

The Sound of F#

I'm doing a bit of reading and dabbling in the world of sound with the hope that it will help me to tackle an interesting future project. One of the first steps to a successful project is to know more about what the project will actually entail. Right now I'm in exploratory mode. It's a bit like doing drills in sports or practicing simple songs from a book that are not really super interesting to you except for helping you learn the instrument you want to learn. I'm thinking about learning to "play" the signal processor, but it seemed like a good start to learn to produce signals first. And sound signals are what interest me at the moment.

Sound is pretty complicated, but we'll start with one of the most basic of sounds and what it takes to make it in Windows, writing in F#. Our goal in this post is to play sounds asynchronously in F# Interactive.

The first iteration of this comes from Shawn Kovac's response to his own question found here at Stack Overflow. Below is my port to F#. I have used the use keyword to let F# handle the Disposable interfaces on the stream and writer to take care of all the closing issues. It's important to note that the memory stream must remain open while the sound is playing, which means this routine is stuck as synchronous.
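(The listing was an image; below is my reconstruction of the shape of the port—a 44-byte WAV header plus 16-bit mono PCM written into a MemoryStream and handed to SoundPlayer. Constants may differ from the original.)

    open System
    open System.IO
    open System.Media

    let PlaySound (freq: float) (durationSec: float) =
        let sampleRate = 44100
        let samples = int (float sampleRate * durationSec)
        let dataLen = samples * 2                   // 16-bit mono
        use stream = new MemoryStream()
        use writer = new BinaryWriter(stream)
        writer.Write("RIFF"B)                       // RIFF/WAVE header
        writer.Write(36 + dataLen)
        writer.Write("WAVE"B)
        writer.Write("fmt "B)
        writer.Write(16)                            // fmt chunk size
        writer.Write(1s)                            // PCM
        writer.Write(1s)                            // mono
        writer.Write(sampleRate)
        writer.Write(sampleRate * 2)                // bytes per second
        writer.Write(2s)                            // block align
        writer.Write(16s)                           // bits per sample
        writer.Write("data"B)
        writer.Write(dataLen)
        for i in 0 .. samples - 1 do
            let t = float i / float sampleRate
            writer.Write(int16 (16000.0 * sin (2.0 * Math.PI * freq * t)))
        stream.Position <- 0L
        // The stream must stay open while the sound plays.
        use player = new SoundPlayer(stream)
        player.PlaySync()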



To get an asynchronous sound we can shove the PlaySound function into a background thread and let it go. This looks like the following:
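(Reconstruction: the synchronous player wrapped in a background thread.)

    open System.Threading

    let PlaySoundAsync (freq: float) (durationSec: float) =
        let t = Thread(fun () -> PlaySound freq durationSec)
        t.IsBackground <- true
        t.Start()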



A limitation of the PlaySound() approach given above is that it limits your method of sound production. The details of the sound produced are buried inside the function that plays the sound. I don't want to have a bunch of separate routines: one that plays sine waves, one that plays saw tooth, one that plays chords with two notes, one that plays chords with three notes, one that includes an experimental distortion guitar—whatever. The sound player shouldn't care about the details; it should just play the sound, whatever it is. (Not a criticism of Kovac—he had a narrower purpose than I do; his choice was valid for his purpose.)

I want to decouple the details of the sound to be produced from the formatting and playing of the sound. Here we have the basic formatting and playing of the sound packaged up that will take any wave form sequence we give it (sequence of integer values):
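(Again my reconstruction, reusing the opens from PlaySound above: the framing is identical, but the wave form now arrives from the caller as a sequence of integer samples.)

    let PlayWave (wave: seq<int>) =
        let sampleRate = 44100
        let samples = Seq.toArray wave              // realize once to learn the length
        let dataLen = samples.Length * 2
        use stream = new MemoryStream()
        use writer = new BinaryWriter(stream)
        writer.Write("RIFF"B)
        writer.Write(36 + dataLen)
        writer.Write("WAVE"B)
        writer.Write("fmt "B)
        writer.Write(16)
        writer.Write(1s)
        writer.Write(1s)
        writer.Write(sampleRate)
        writer.Write(sampleRate * 2)
        writer.Write(2s)
        writer.Write(16s)
        writer.Write("data"B)
        writer.Write(dataLen)
        samples |> Array.iter (fun s -> writer.Write(int16 s))
        stream.Position <- 0L
        use player = new SoundPlayer(stream)
        player.PlaySync()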



I haven't decoupled this as far as we could. If we wanted to handle multiple tracks, for example, we would need to take this abstraction a little further.

Now, we also need some functions for generating wave forms. Sequences are probably a good choice for this because you will not need to put them into memory until you are ready to write them to the memory stream in the PlayWave function. What exactly happens to sequence elements after they get iterated over for the purpose of writing to the MemoryStream, I'm not quite clear on. I know memory is allocated by MemoryStream, but whether the sequence elements continue to have an independent life for some time after they are iterated over, I'm not sure about. The ideal thing, of course, would be for them to come into existence at the moment they are to be copied into the memory stream (which they will, due to lazy loading of sequences) and then fade immediately into the bit bucket—forgotten the moment after they came into being.

Here are a few routines that relate to making chords that are not without an important caveat: the volume control is a bit tricky.
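(Reconstruction; names follow the prose. Note the caveat from the text: sineChord divides the requested volume by the number of notes so that the summed samples stay in range.)

    let sampleRate = 44100

    let sineWave (freq: float) (durationSec: float) (volume: int) : seq<int> =
        let count = int (float sampleRate * durationSec)
        seq { for i in 0 .. count - 1 ->
                int (float volume *
                     sin (2.0 * Math.PI * freq * float i / float sampleRate)) }

    let sineChord (freqs: float list) (durationSec: float) (volume: int) : seq<int> =
        let perNote = volume / freqs.Length         // divide the volume among the notes
        let waves =
            freqs |> List.map (fun f -> sineWave f durationSec perNote |> Seq.toArray)
        seq { for i in 0 .. waves.Head.Length - 1 ->
                waves |> List.sumBy (fun w -> w.[i]) }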



Here is a sample script to run in F# Interactive:
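(A hypothetical script; the frequencies are A2, E3, A3, C#4, E4, A4.)

    sineChord [ 110.0; 164.81; 220.0; 277.18; 329.63; 440.0 ] 2.0 20000
    |> PlayWave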



The above plays a version of an A chord (5th fret E-form bar chord on the guitar).

Note that I have divided the volume by the number of notes in the chord. If we don't watch the volume like this and let it get away on us, it will change the sound fairly dramatically—at worst, something like the "snow" from the old analog TVs when you weren't getting a signal; if you just go a little over, you will get a few pops, like a slightly degraded vinyl recording.

We could include volume control in our wave form production routines. This wouldn't necessarily violate our decoupling ideals. The main concern I have is that I want the volume indicated to represent the volume of one note played and the other notes played should cause some increase in volume. I want the dynamic fidelity of the wave forms to be preserved with different sounds joining together. Even after that, you may want to run a limiter over the data to make sure the net volume of several sounds put together doesn't take you over your limit.

However, I need to do more research to achieve these ideals. For now, I will break it off there.

Maxima in Common Lisp via ASDF

I have previously used Lisp code that ran in Maxima. This is fairly straightforward if you know Common Lisp. Basic instructions can be found in Chapter 37.1 of the Maxima help files (look for the Contents link at the top of any of the help pages). But how about the other way around—Maxima from the Lisp side?

I recently looked into running Maxima inside Common Lisp using ASDF and found this. That gave me hope that it was possible and a little insight into what it was trying to do. I have sometimes wanted to create a user interface in some other environment (such as LispWorks) and call upon Maxima to do some computations for me.

To get yourself set up to access Maxima from Common Lisp you need:
  1. The source, including the maxima.asd file.
  2. Some idea where to put stuff and, perhaps, a few tweaks to the maxima.asd file.
  3. Some file search variables set up to allow Maxima load statements in your Common Lisp code, so you can access functionality found in the share packages or your own packages.

Tweaks

I put the source under my common-lisp directory so that a simple call to (asdf:load-system :maxima) would work.

I found that I needed to edit the maxima.asd file. I don't want to tell ASDF where to put the result or interfere with its "normal" operation. "ASDF, you're in charge!" is basically how I look at it. Accordingly, I commented out the output-files :around method. I suppose you could change the *binary-output-dir* here as well, if you would like to—perhaps to suit whichever Lisp you are compiling to.

Fig. 1. I'm pretty sure I don't need this code for my purposes, so I have it commented out.

Accessing Shared Packages

You might not realize how much stuff is in the share packages if you have been using the wxMaxima interface only. It appears that a number of widely used share packages are automatically loaded in wxMaxima, but maxima.asd only loads the core components. This is not a serious impediment—you can determine which packages to load from the Maxima help files.

But before you can use the Maxima load command, you need to set the file_search_maxima and file_search_lisp Maxima variables. I combined a call to asdf:load-system with the setting of these variables in a file under my common-lisp directory. I called this file load-max.lisp and it looks a bit like this (note how hairy it gets if you scroll right):
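(The original file was shown as an image; the following is a reconstruction, and the share paths are illustrative—point them at your own Maxima source tree.)

    ;; load-max.lisp -- a reconstruction; paths are illustrative.
    (asdf:load-system :maxima)
    (in-package :maxima)
    (setf $file_search_maxima
          '((mlist simp)
            "~/common-lisp/maxima/share/###.{mac,mc}"
            "~/common-lisp/maxima/share/{algebra,calculus,stringproc,linearalgebra}/###.{mac,mc}"))
    (setf $file_search_lisp
          '((mlist simp)
            "~/common-lisp/maxima/share/###.{lisp,fasl}"
            "~/common-lisp/maxima/share/{algebra,calculus,stringproc,linearalgebra}/###.{lisp,fasl}"))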



So, if I want to use Maxima in my Lisp session, I just need to call (load "~/common-lisp/load-max.lisp"). I can move to the Maxima package by calling (in-package :maxima).

Try out the diff function:

MAXIMA> ($diff #$x^2$ #$x$)
((MTIMES SIMP) 2 $X)

Note the syntax for referring to the diff function ($diff) and the syntax for an entire Maxima expression (#$expression$). There's also some lowercase-to-uppercase switching to watch out for, but see Chapter 37 of the Maxima help documentation for more about this.

Suppose I want to use something in a share package or one of my own custom Maxima packages. I can use the Maxima version of load, namely, $load.

MAXIMA> ($load "stringproc")

MAXIMA> ($parse_string "x^2")
((MEXPT) $X 2)

One more example, to show how to deal with Maxima-specific features that are beyond the scope of Common Lisp. In general, use the meval function as in:

(setf n (* 3 4 5 6 7))
(maxima::meval `((maxima::$divisors) ,n))

((MAXIMA::$SET MAXIMA::SIMP) 1 2 3 4 5 6 7 8 9 10 12 14 15 18 20 21 24 28 30 35 36 40 42 45 56 60 63 70 72 84 90 105 120 126 140 168 180 210 252 280 315 360 420 504 630 840 1260 2520)

(If you're interested, see Maxima-discuss archives. HT Robert Dodier.)