Saturday, August 17, 2019

Using Calculated 1 Rep Maxes as an Index for Progression

I generally use an estimated 1 rep max as the measure of my progress in training for any particular exercise that I do. The FitNotes app somewhat encourages this thought process by providing a graph view that plots estimated 1 rep maxes over time. I don't disagree with the idea in principle. In fact, I had an idea that I might want to make a point of using 1 rep maxes as a method of planning my progressive overload.

Progressive overload is the catchphrase that refers to adding additional training stimulus over time. Given a training stimulus, you can increase it in a few ways, some of which lend themselves to different goals:
  1. Increase the weight
  2. Increase the repetitions per set
  3. Increase the number of sets
  4. Decrease the rest period
Doing any of these, or a combination of these, can increase the training stimulus so that you encourage additional growth. There's plenty to read out there about the difference between training for strength and training for muscle growth, but there may be a case to be made for balancing both of these goals. 

In very rough, non-technical terms: 
  1. Strength means your central nervous system is better at firing off more muscle fibers--maybe more intensely. Training for strength in this sense is normally done using lower repetitions per set.
  2. Muscle growth means the muscle fibers are bigger. They are usually able to move heavier weight than they used to as a result of being larger. Training for muscle growth is normally done using higher repetitions per set.
These are different strategies for training your muscles, and the divergence between them may or may not admit of synergies when they are combined. Or maybe it just doesn't suck too bad to mix it up a bit. I don't know. I'm not sure anybody really does. But I thought it would be interesting to use calculated 1 rep maxes as an index for choosing a combination of weight and rep changes that results in a kind of theoretical progressive overload. I don't imagine I'm doing anything new, but hopefully the spreadsheet I cooked up makes it easy enough for you to apply in your own training, if you don't think it's crazy.

Estimated 1 rep maxes are based on charts like this one: https://strengthlevel.com/one-rep-max-calculator. The idea is, if I do shoulder press with 95 lbs and I can do 12 reps before failure, I want to know how much weight I could shoulder press exactly once. According to the chart I linked to, if I can do 12 reps with a given weight, that weight is 71% of my 1 rep max. So, if I divide 95 by 0.71, I get my calculated 1 rep max: about 134 lbs. As long as you have a good chart of percentages, the calculation is easy to do in a spreadsheet using a lookup or index function. I nabbed the percentages from my FitNotes app and am using them in my spreadsheet.

Reps % of 1 Rep Max
1 100
2 97
3 94.5
4 91.5
5 89
6 86
7 83.5
8 80.5
9 78
10 75
11 73
12 71.5
13 69.5
14 68
15 66.5
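
If you'd rather script the lookup than spreadsheet it, it only takes a few lines. Here is a minimal sketch in Common Lisp (the names are mine; the percentages are the FitNotes ones from the table above):

;; Percent of 1 rep max by rep count; element 0 corresponds to 1 rep.
(defparameter *percent-of-1rm*
  #(100 97 94.5 91.5 89 86 83.5 80.5 78 75 73 71.5 69.5 68 66.5))

(defun estimated-1rm (weight reps)
  "Estimate a 1 rep max from WEIGHT lifted for REPS reps (1 to 15)."
  (/ weight (/ (aref *percent-of-1rm* (1- reps)) 100)))

For example, (estimated-1rm 95 12) computes 95 / 0.715, or about 132.9 lbs.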

Using this chart on a sheet named '1 rep max percentages', I use the index function to drive a calculation. Along the left-hand side I have the weight lifted, and along the top I have the number of repetitions. To interpret the chart, find the weight you lifted on the left-hand side and find the column that corresponds to the number of repetitions you did. The number at that intersection is your estimated 1 rep max--provided the percentages driving the spreadsheet are legit. If you come across a chart you think is better, you can edit the values in that table and the change will be reflected in the estimated 1 rep max.

Weight \ Reps 1 2 3 4 5 6 7 8 9 10
2.5 2.50 2.58 2.65 2.73 2.81 2.91 2.99 3.11 3.21 3.33
5 5.00 5.15 5.29 5.46 5.62 5.81 5.99 6.21 6.41 6.67
7.5 7.50 7.73 7.94 8.20 8.43 8.72 8.98 9.32 9.62 10.00
10 10.00 10.31 10.58 10.93 11.24 11.63 11.98 12.42 12.82 13.33
12.5 12.50 12.89 13.23 13.66 14.04 14.53 14.97 15.53 16.03 16.67
15 15.00 15.46 15.87 16.39 16.85 17.44 17.96 18.63 19.23 20.00
17.5 17.50 18.04 18.52 19.13 19.66 20.35 20.96 21.74 22.44 23.33
20 20.00 20.62 21.16 21.86 22.47 23.26 23.95 24.84 25.64 26.67
22.5 22.50 23.20 23.81 24.59 25.28 26.16 26.95 27.95 28.85 30.00
25 25.00 25.77 26.46 27.32 28.09 29.07 29.94 31.06 32.05 33.33
27.5 27.50 28.35 29.10 30.05 30.90 31.98 32.93 34.16 35.26 36.67
30 30.00 30.93 31.75 32.79 33.71 34.88 35.93 37.27 38.46 40.00
35 35.00 36.08 37.04 38.25 39.33 40.70 41.92 43.48 44.87 46.67
40 40.00 41.24 42.33 43.72 44.94 46.51 47.90 49.69 51.28 53.33
45 45.00 46.39 47.62 49.18 50.56 52.33 53.89 55.90 57.69 60.00
50 50.00 51.55 52.91 54.64 56.18 58.14 59.88 62.11 64.10 66.67


The contents of cell B2 are =$A2 / (INDEX('1 rep max percentages'!$A$2:$B$16, B$1, 2)/100).  The index function takes three parameters:
  • Lookup table: '1 rep max percentages'!$A$2:$B$16
  • Index (row) of table: B$1
  • Column of table: 2
The $ signs are a way of saying to Excel, "when you copy me to another cell, don't change me, keep me the same." So, the parts of the cell references that don't have a $ sign in front of them change when you copy them (relative references), but the $ parts do not (absolute references). The table of percentages is fixed with the $ signs, and the row number (B$1) is fixed so that when you copy the cell down, it still looks to the first row for the number of reps. The 2 is the column of the percentage table we need to feed into the formula. The table is pretty easy to make.
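
For example, with the weights running down column A from row 2 and the rep counts 1 through 10 across row 1, the cell for 10 lbs and 4 reps is E5; the copied formula there evaluates to 10 / (91.5/100), or about 10.93, matching the table above.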

Here are links to the spreadsheet in Excel format and a printout to PDF, if you can't be bothered making the spreadsheet yourself.
Progressive Overload
Using the chart to help you with progressive overload is probably pretty obvious, but it maybe deserves some brief comments. Suppose you are doing an exercise with 155 lbs for 3 sets of 12 repetitions. Twelve reps is not actually the most you could do in a single all-out set, but we don't care about your real 1 rep max. The calculation is just an index. The fact that it is sometimes used for estimating 1 rep maxes doesn't matter to us; it is just a way of guessing at a progression. We don't care that it isn't a "real 1 rep max". We only care that, by interpreting this calculation as an index, we can use it for guidance when switching up our weight and reps at the same time.
It's just a number.
But it's a number that can help you.

Figure 1 shows some highlights of a way forward from 3 sets of 12 reps with 155 lbs, gradually moving toward lower reps and higher weight while (hopefully) targeting an effectively greater stimulus. After you manage a full 3 sets of 12 reps, try for 3 sets of 10 with 165 lbs on your next workout. In theory, if you achieve that, you have improved your strength (in some sense). If you don't succeed, you try again next workout, until you do succeed. Your next target could be 3 sets of 9 with 175 lbs.
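
Using the percentages from the table above, the index at each step works out to
\[155 / 0.715 \approx 216.8, \qquad 165 / 0.75 = 220, \qquad 175 / 0.78 \approx 224.4,\]
so each target amounts to a modest bump in the calculated 1 rep max rather than a leap.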

Fig 1. Highlights show a possible path of progression.
To the extent that this method of indexing the effective training stimulus may be valid, it can be used in a variety of ways. If you have been lifting very heavy at low reps, and you want to give your tendons a break or just change things up and lift light for fun, you can use this chart to help you decide what your target should be.

It's important to note, though, that significantly changing the number of reps per set can have a large impact on your accumulated fatigue from set to set. My own experience suggests that high repetition work accumulates a lot of fatigue, while lower repetition work involves less of a fatigue component. This confounds the matter when you make a large change to the number of reps, and the index probably comes apart at the seams a bit. These numbers may still help you get in the ballpark.

Friday, May 24, 2019

Recursion and Accumulators

Working through Paradigms of Artificial Intelligence Programming, I have worked to wrap my head around accumulators and I have implemented one that worked surprisingly quickly. My example is a factorial design experiment. I have 4 reference implementations of a factorial design on an experiment with 7 factors, each factor having 10 treatments. This is a bit of a ridiculous size, but it gave me clear separation in performance. I have split the code up into sections so you can focus on the parts of the program that matter to you.

Supporting Code



Main Functions



Speed Testing

To make the testing as fair as possible, I started emacs fresh for each test run. Under these conditions, factorialize took 2.5 seconds, factorialize-2 took 3.7 seconds, factorialize-3 took 22 seconds, and factorialize-4 took 12 seconds.

A large amount of the time spent in these examples is spent in garbage collection. The factorialize-4 example could probably be improved by allocating a large chunk of memory in a 2-dimensional array up front, after which the rest of the work would involve much less memory allocation. The landslide difference that marks factorialize off from the other methods appears to be the amount of shared memory it involves. Reassigning various locations of the resulting list produces changes in other lists, since there are cons cells shared among the lists. For my particular application this was proving to be a problem, so I have changed over to the factorialize-4 version of the function. Since my actual data is likely much smaller than that used in the speed tests of this post, I'm not overly concerned about the difference in speed.

An advantage of the method of factorialize-4, which uses an enumerator, is that it could be used to output data to a database or file in a way that doesn't need much run-time memory. Very likely in these cases, the time spent sending and saving data to persistent storage is much greater than the computational effort. The factorialize and factorialize-2 methods do not lend themselves to this, as they do not yield complete tuples until all of the tuples are created. factorialize-4 creates complete tuples one at a time, and these can be acted on immediately without waiting for the rest of the output to be generated.
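
To give the flavor of the one-tuple-at-a-time approach, here is a minimal sketch (the name and details are mine, not the actual factorialize-4) that hands each completed tuple to a consumer function as soon as it exists:

(defun map-factorial-design (radices consumer)
  "RADICES is a list giving the number of treatments for each factor.
Call CONSUMER on each tuple of treatment indices, odometer style."
  (let* ((n (length radices))
         (radix (coerce radices 'vector))
         (counter (make-array n :initial-element 0)))
    (loop
      ;; Each tuple is a fresh list, so no cons cells are shared.
      (funcall consumer (coerce counter 'list))
      ;; Advance the rightmost digit, carrying leftward on wrap-around.
      (let ((pos (1- n)))
        (loop while (and (>= pos 0)
                         (= (incf (aref counter pos)) (aref radix pos)))
              do (setf (aref counter pos) 0)
                 (decf pos))
        (when (minusp pos) (return))))))

A call like (map-factorial-design '(10 10 10 10 10 10 10) #'my-save-tuple), with my-save-tuple being whatever consumer you like, would visit all 10^7 tuples while only ever holding one counter in memory.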

Common Lisp Pipes→Enumerators

I've been plodding along through Peter Norvig's Paradigms of Artificial Intelligence Programming and finding some helpful tips along the way. I especially enjoyed the discussion of pipes in chapter 9. I was intrigued by the pipes implementation of the sieve of Eratosthenes and wondered what else I could do with pipes.

Something that I didn't like about pipes is that they keep around a list of the results, when maybe you are actually done with those results. I was taking combinations of factors to produce a factorial experimental design and I decided to do so using something I have called multiple-radix counting. Multiple-radix, meaning the radix (or base) for each digit is different. Imagine an odometer where each spinner has a different number of marks on it. Instead of always going from 0 to 9, some of the spinners might go 0 to 1 or 0 to 5. The sequencing is the same: when the rightmost spinner reaches its end, it wraps around to zero and advances the next spinner to the left. Here's sample output that shows the pattern, taking two spinners with radices 2 and 3:
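
(0 0)
(0 1)
(0 2)
(1 0)
(1 1)
(1 2)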



I have taken the definitions from PAIP and modified them to avoid hanging on to the part of the pipe that has gone past. I have renamed certain functions/macros and made minor changes to them to produce a result that behaves differently. Instead of make-pipe, I have make-enumerator, although the macro is identical. It's important that I have a new name for this macro that suits the different usage, in case I decide to implement enumerators differently in the future. The functions get-current and get-next are the replacements for head and tail, respectively. The representation is very similar to that used for pipes: I use a cons cell where the car is the current value and the cdr is a function to call.
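
Here is a minimal sketch of the shape these take (the representation is as described above; treat the details as illustrative rather than my exact code):

;; An enumerator is a cons of the current value and a closure that
;; computes the rest. The macro body is the same as PAIP's make-pipe.
(defmacro make-enumerator (current next)
  `(cons ,current #'(lambda () ,next)))

(defun get-current (enumerator)
  "The value at the front of the enumerator."
  (car enumerator))

(defun get-next (enumerator)
  "Call the closure to produce the next enumerator. Unlike PAIP's tail,
nothing is cached, so values already consumed can be garbage collected."
  (funcall (cdr enumerator)))

;; Example: the integers from N, produced one at a time, on demand.
;; (get-current (get-next (integers-from 5))) => 6
(defun integers-from (n)
  (make-enumerator n (integers-from (1+ n))))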



The purpose of extract-by-multiple-radix is to take a selection out of source (an array of arrays) based on the indices that were specified. If you were doing multiplication of multiple-term factors, you would take the terms by index from each factor and multiply them. To get the terms based on index, you would use extract-by-multiple-radix.
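
As a sketch, assuming source is a vector of vectors and the indices arrive as a list (the actual argument conventions may differ):

(defun extract-by-multiple-radix (source indices)
  "Select one element from each inner vector of SOURCE, using the
corresponding index from INDICES (one multiple-radix count)."
  (loop for index in indices
        for i from 0
        collect (aref (aref source i) index)))

So with source #(#(2 3) #(5 7 11)), the count (1 2) extracts (3 11), which you could then combine with whatever operation your application calls for.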

Friday, April 19, 2019

"and" and "or" in Set Theory and English

Relating the English words "and" and "or" to concepts in set theory and logic can be done by starting with basic examples and reasoning by analogy to more complex cases. It must also be remembered that human language is generally less precise than formal logic. In natural language, we recognize a range of meaning for a word and so we must look not for one definition, but a few definitions. Here, below, I attempt to give a listing of the use-cases of the words "and" and "or".

and:

  1. between two objects, P and Q means the set \(\{P, Q\}\).
  2. between two sets, P and Q means the set \(P\cup Q\).
  3. between two propositions, P and Q means that both P is true and Q is true. (It is difficult to avoid a self-referencing definition.)

A possible objection to use-case 2 for "and" is that the conjunction "and" is used in the definition of intersection, and so P and Q on sets should refer to set intersection, not union. However, a set or a list or the designation of a category refers to objects (conceptual ones or otherwise). Our best analogy is with use-case 1, where we have both items included in the set. This conforms to customary usage. For example, in the designation The Department of Math and Computer Science, customary usage indicates this means "the department which contains math courses and computer science courses", or, more expansively, "the department which contains math courses and contains computer science courses". We recognize this rendering as use-case 3, which serves as the argument for use-case 2. If we called it The Department of Math or Computer Science or The Department of Math and/or Computer Science, this would imply that the name would be apt if the department contained math courses only, computer science courses only, or both types of courses. The name of such a department would not inspire confidence in an applicant interested in one of these areas specifically that the department offered the kind of courses he/she was interested in.

or:
  1. between two objects, exclusively, either P or Q means exactly one element of the set \(\{P, Q\}\).
  2. between two objects, inclusively, P or Q means any element of the set \(\{P, Q\}\).
  3. between two sets, exclusively, either P or Q normally means either an element of P or an element of Q, according to the sense of use-case 1.
  4. between two sets, inclusively, P or Q normally means an element of \(P\cup Q\).
  5. between propositions, exclusively, either P or Q means exactly one of P or Q is true.
    1. Yes, probably self-referencing again.
  6. between propositions, inclusively, P or Q means any of P is true, Q is true, or both P and Q are true.
    1. Still self-referencing, I guess.
At least some of the above inclusive cases are sometimes expressed in English using "and/or" in place of "or" as a means of indicating the inclusive nature of the conjunction intended. The exclusive uses are sometimes emphasized by using the word "either", as above.

At this point, we start to ask ourselves, "where is set intersection in all this?" Given that union occurs in our definitions, it seems reasonable that we should find set intersection somewhere in the mix. After all, we use "and" and "or" for defining intersection and union, respectively. We can see intersection in restating some of the use-cases given.

We can take our example of The Department of Math and Computer Science and express it in a way that uses set intersection. Let M be the set of departments in a university offering the majority of mathematics courses and let C be the set of departments offering the majority of Computer Science courses. Then \(M\cap C\) is The Department of Math and Computer Science. If we get the empty set, then there is no such department at the given university. This is use-case 3 for "and", expressed differently.

We can also see set intersection in the exclusive use-cases of "or". Use-case 3, in particular, can be expressed as \((P\cup Q) - (P\cap Q)\). Depending on the context, this might appear redundant. For example, we might say that some bathrooms are made for either males or females. We take M as bathrooms intended for male occupancy and F as bathrooms intended for female occupancy. This can be seen as use-case 3, but the intersection is empty. In fact, the nature of the statement may be to emphasize the fact that the intersection is empty: the speaker is really saying \(M\cup F = (M\cup F) - (M\cap F)\), that is, "all of the bathrooms I have in mind are for single-sex occupancy; none of the bathrooms I have in mind are accepting of both male and female". There may be other bathrooms where the intersection is non-empty (e.g., single occupancy), but not the bathrooms the speaker has in mind.

Saturday, March 2, 2019

Kinetic Energy and the Human Body - Shot Put Illustration

In a previous post (Remembering Gravity), I was estimating the change to my vertical jump if my strength, speed, and power capabilities remained constant and my weight decreased. I mentioned that one of my assumptions was that the change in weight involved was probably small enough that the force production of my legs was the same for the light me as the heavy me. I also suggested that if the difference in weight was significant, this assumption would start to unravel. My example to motivate this at an intuitive level was to compare shot putting with a 12 lb shot with shot putting a softball. Intuitively, you can probably recognize that producing the same "power" with the softball as with the shot would involve an incredible amount of speed, and that the result of shot putting a softball would be governed by maximum speed production rather than maximum force production. I thought it would be interesting to put numbers to this illustration.

When I jump or throw or "put", I am imparting force over distance. That force may not be constant, and the direction may not be either, but at the end of force application I have imparted a net amount of work to an object (my own body, a ball, a shot). In the end, the object has an amount of kinetic energy
\[W_{NET} = K = \frac{1}{2} m v^2.\]
So, to state my illustration more technically, I am saying that, using an identical movement pattern, allowing only changes in speed of movement of my body, I cannot impart as much kinetic energy to a very light object as I can to a heavier, but still manageable object.

So, to establish the general invalidity of the assumption of equivalent force production under different loading, we can consider putting a 7.260 kg shot and a 190 g softball (or 146 g baseball) and determine the approximate amount of kinetic energy imparted to each. A little hitch is the need to know the release angle. An interesting post on Brunel University's site (Shot Put Projection Angle) highlights that in practice the optimum release angle is not 45° but may be between 26° and 38°, and this is due to a biomechanical advantage favoring superior speed production in the horizontal direction.

Let's Pretend

Suppose we take a world record shot put of 23.12 m with a 7.260 kg shot, assume a 30° release angle and determine the kinetic energy imparted to the shot by the putter. Then we will make the very erroneous assumption that the same person would have been capable of imparting the same kinetic energy to a softball or baseball, using the same biomechanical movement pattern. We are expecting to get a ridiculous answer which will satisfy us that the assumption that we can do this is bogus.

To determine the kinetic energy of the shot, I need to find the initial velocity. We use a few common kinematic equations. We can use the relationship between the vertical and horizontal velocity which is determined by the release angle, so that \(\tan \theta = v_y / v_x\), where \(\theta\) is the release angle. We neglect the effect of drag from the air. Taking \(X\) as the horizontal distance traveled by the shot, we have
\[X = v_x t_t = \frac{v_y}{\tan \theta} t_t\]
and so
\[t_t = \tan \theta \frac{X}{v_y}.\]
Using the vertical displacement equation, we have
\[0 = d_R + v_y t_t + \frac{1}{2} g t_t^2 \]
\[0 = d_R + v_y \Big(\frac{\tan \theta}{v_y} X\Big) + \frac{1}{2} g \Big(\frac{\tan \theta}{v_y} X\Big)^2\]
\[0 = d_R + X \tan \theta + \frac{g X^2 \tan^2 \theta}{2 v_y^2} \]
\[\frac{X^2 g \tan^2 \theta}{2 v_y^2} = -d_R - X \tan \theta\]
\[2 v_y^2 = - \frac{X^2 g \tan^2 \theta}{d_R + X \tan \theta}\]
\[v_y = X \tan \theta \sqrt{\frac{-g}{2 (d_R + X \tan \theta)}}\]
Since \(v_y = v_p \sin \theta\), we then have the release velocity \(v_p\) of the shot as \[v_p = \frac{v_y}{\sin \theta}.\]
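Plugging in the numbers used in the maxima code below (\(X = 23.12\) m, \(\theta = 30°\), \(d_R = 2.0\) m, \(g = -9.81\) m/s²) gives \(v_y \approx 7.55\) m/s and \(v_p \approx 15.1\) m/s.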
Here's where we make our faulty assumption. (Faulty, because we are comparing objects of very different masses.) We take the kinetic energy the putter could generate on the shot and assume he can generate the same kinetic energy with a much smaller mass. Hence,
\[\frac{1}{2} m_p v_p^2 = \frac{1}{2} m_s v_s^2 \]
and so
\[v_s = v_p \sqrt{\frac{m_p}{m_s}}.\]
Since we are assuming biomechanical identity, we use the same release angle, giving us the vertical initial velocity of the softball, \(v_{sy}\)
\[v_{sy} = v_s \sin \theta,\]
and horizontal
\[v_{sx} = v_s \cos \theta.\]
From here we proceed to determine how long the ball will be in the air. There will be a time to the peak \(t_p\) and a time to drop \(t_d\) in the total time \(t_t\). For the time to peak we have
\[0 = v_{sy} + g t_p\]
\[t_p = -\frac{v_{sy}}{g}\]
From this we get our maximum height, \(h_p\)
\[h_p = d_R + v_{sy} t_p + \frac{1}{2} g t_p^2\]
\[= d_R - \frac{v_{sy}^2}{2 g}\]
This allows us to find our time to drop from the displacement equation as
\[0 = h_p + \frac{1}{2} g t_d^2\]
\[t_d = \sqrt{\frac{2 h_p}{-g}}\]
So, we can give an expression of our bogus displacement of the softball as a shot put by
\[X_{sb} = v_{sx} (t_p + t_d).\]
We can follow through these expressions to give output without spending the time to make a monster expression out of it. I looked at doing that briefly, and there did not appear to be many cancellations that would tidy the expression up, so there are no computational savings in combining everything into one monster formula. Therefore, since I want a number, I will just put these expressions into maxima and crunch.

Here's the sequence of expressions in maxima:

theta: float(30 * %pi/180);  /* release angle in radians */
m_p: 7.260;                  /* mass of the shot, kg */
m_s: 0.190;                  /* mass of the softball, kg */
d_R: 2.0;                    /* release height, m */
X_sp: 23.12;                 /* world record put distance, m */
g: -9.81;                    /* gravitational acceleration, m/s^2 */

v_y: X_sp * tan(theta) * sqrt(-g/(2*(d_R+X_sp*tan(theta))));
v_p: v_y / sin(theta);

v_s: v_p * sqrt(m_p/m_s);
v_sx: v_s * cos(theta);
v_sy: v_s * sin(theta);
t_p: -v_sy/g;
h_p: d_R - v_sy^2/(2*g);
t_d: sqrt(2*h_p/(-g));

X_sb: v_sx* (t_p + t_d);

The final answer comes out at about 772 m for a softball. Changing the mass to a baseball gives about 1003 m.

Athletes who use biomechanically superior form to throw a baseball actually come in at more like 136 m (Glen Gorbous).

So, if you're trying to use the human body to generate kinetic energy, there's a mass sweet spot somewhere. Too little mass and you just can't generate the speed. Too much mass and you can't move it at all. Somewhere in between is the optimum, if, for reasons unknown, you care about generating a lot of kinetic energy.

Saturday, August 18, 2018

Calorie Deficit

The quest to lose a few pounds of fat and gain muscle involves a few important numbers and concepts. A lot of it centers on the notion of metabolism, which for us average, non-medical people means something like "at what rate do you burn calories?"

In very rough terms, your calorie requirements are driven by:
  1. Basal Metabolic Rate
  2. Activity level
  3. Specific activities
  4. How much you eat

Basal Metabolic Rate

The general description of basal metabolic rate (BMR) is the number of calories you burn at rest; it is possible to get standard estimates from online calculators like this one. These calculators are seemingly based on averages and depend on gender, age, height, and weight.

A consideration not generally included in these calculators is the less commonly available information on body composition, such as the relative amounts of fat and muscle in the body. This is relevant information, since fat cells and muscle cells burn energy at different rates. A pound of fat burns 2 calories per day, but a pound of muscle burns 5 calories a day (WebMD). This is a moderate effect. If man A is 200 lb with 25% body fat, then his fat is burning 100 calories per day (for free). If man B is 20% body fat (same body weight), and we suppose that he therefore has 5% more muscle than man A, then his fat is burning 80 calories per day, but that 5% extra muscle is burning 50 calories per day. Man B edges out man A by 30 calories as a result of having more muscle mass. This is very rough, of course, but gives you the general idea of the order of magnitude of the benefit. The Katch-McArdle formula uses lean body mass as a variable, which gives acknowledgement to this effect.

There is some evidence that BMR correlates with the size of certain internal organs (e.g., this study on voles). This information is not generally available for people trying to lose a few pounds and so is impractical, but it does have the interesting benefit of being a possible, if partial, explanation of what specifically may be different about individuals who burn calories at a higher rate--more specific than the vague explanation: "genetics".

My BMR is about 2030 calories based on standard online calculators.

Activity Level

A lot of things seem to have an impact, at least theoretically, on how many calories you actually burn in a day. Viewed as a percentage increase to your BMR, your activity level seems to have the biggest impact. (Of course, your BMR is already the largest total contributor.) As this study abstract seems to show, the impact of exercise on your resting metabolic rate is still something of a quandary to calculate practically today. One of the standard ways of adjusting the BMR to yield an overall calorie burn is to use a multiplier. Here is one of the standard statements of these common multipliers (found here):

Activity Level Multiplier
Sedentary 1.2
Lightly active 1.375
Moderately active 1.55
Very active 1.725
Extra active 1.9
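
For example, taking my BMR of about 2030 calories from above, "moderately active" would put my overall burn at roughly \(2030 \times 1.55 \approx 3150\) calories per day.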

You can find suggestions for what constitutes the various levels of activity, but some of the suggestions appear dubious. For example, does my frequency of exercise activities in a week trump the fact that I work a desk job? If you do CrossFit 3 or 4 days a week and work a desk job, is that a 1.55 or 1.725? If you are a teacher and spend a lot of your day on your feet, but do zero "exercise", does that justify 1.375?

Specific Activities

Suppose I think I'm a 1.55 multiplier from my overall activity level standpoint. Should I only log activities that are above and beyond my regular activity level, or consider all of it (if possible)? If I log all of my activities' estimated calorie burn, am I double dipping (numerically speaking)?

How Much You Eat

Now, of all the consternating realities to deal with when trying to lose weight, the fact that it has a feedback mechanism in it is somewhat irritating. That's not to say that it isn't a useful and necessary mechanism. If you are running at a calorie deficit, your body will get used to this fact and bump down the rate of calorie burn (see here). You can imagine a group of people (cells) evaluating the amount of food coming into a compound (body) and looking at each other uneasily when the amount of food is less than they were used to. "We better slow down a bit," they say to each other. "We need to conserve food."

An interesting suggestion for dealing with this phenomenon is to confuse the cells/body: take a break from the diet for a little while to put the body at ease again, limiting the degree to which the phenomenon interferes with your weight loss plan. The cells look at each other again and say, "Maybe we were hasty. Let's go back to normal. Plenty of food coming in now."

The reverse appears to be true as well. Overeating causes the cells to burn calories somewhat faster. Not necessarily enough to justify the overeating! (So, don't get smart with me.)

The composition of macronutrients in the calories you eat is also important. A recent Canadian study reported on by Global News had some men taking in very high protein (240 g for men averaging 225 lbs); together with an intense training program, this had the effect of increasing the rate of fat loss while also allowing the gaining of muscle mass at the same time. The program sounds brutal. I'm intrigued. Here's another interview done by CBC on the same study (or studies): How to lose weight and gain muscle — fast: new McMaster study

Take With a Grain of Salt

A statement in the Wikipedia article on the Harris-Benedict equation gives a reference to about a 200 calorie uncertainty in the calculated values of BMR. That is probably about 10% for most people. That isn't necessarily a deal breaker, but maybe a better way forward is to start logging your calories and see where you're at.

Before you modify your caloric intake, spend a while logging what you eat and see how many calories that puts you at. If your activity level is consistent on a weekly basis, you can pretty well see what happens over a month if you log your food intake, weight, and waist measurement. This way, after a month you can look at the overall trends. If your weight and waist were staying roughly constant and your caloric intake was consistent, then you have probably found your maintenance level calorie intake for your current level of activity.

I work a desk job, but I exercise 5 to 6 days a week, including strength training and a bit of cycling. I'm guessing a 1.5 multiplier is reasonable for me for maintenance. 

Wednesday, December 20, 2017

Equation of Circle in 3D and Snap Tangent

For a time I was on a bit of an AutoCAD-like calculation kick and went through some interesting calculations like snap tangent, snap perpendicular, and intersection of a line with a plane. I wanted to take the next step on snap tangent and consider snap tangent to a sphere.

Snap tangent means: start with some point and try to find a point on the destination object (circle, sphere, ellipse, anything) that causes the line segment between the first point and the second point to be tangent to the object selected. My post about snap tangent showed the result for a circle and worked in 2D. If you get the concept in 2D and are ready to take the concept to 3D for a sphere, you probably recognize without proof that the set of possible points on the sphere which will result in a tangential point forms a circle. You can pretty much mentally extrapolate from the following picture, which is a repeat from the previous post.

Fig. 1 - Can you imagine the snap tangent circle on this if we allow the figure to represent a sphere? The snap tangent circle has center \(E\) and radius \(a\).
I will not repeat the method of calculation from the previous post but will simply observe that we can produce the values \(E\) and \(a\), which are the center and radius of the snap tangent circle. We add to this the normal of this circle, \(N = A - C\), and then normalize \(N\).

So, now we need a way to describe this circle which is not too onerous. A parametric vector equation is the simplest expression of it. We take our inspiration from the classic representation of circles in parametric form, which is
\[ p(\theta) = (x(\theta), y(\theta)) \] \[ x(\theta) = r \cos{\theta} \] \[ y(\theta) = r \sin{\theta}. \]
We have a normal to work from, but we need to have an entire coordinate system to work with. The classic parametric equations have everything in a neat coordinate system, but to describe a circle that's oriented any which way, we need to cook up a coordinate system to describe it with. Observe a change to the foregoing presentation that we can make:
\[ p(\theta) = r \cos{\theta} (1, 0) + r \sin{\theta} (0, 1) = r \cos{\theta} \vec{x} + r \sin{\theta} \vec{y}\]
The normal vector we have is basically the z-axis of the impromptu coordinate system we require, but we don't have a natural x-axis or y-axis. The trouble is there are an infinite number of x-axes that we could choose; we just need something perpendicular to \(N = (n_x, n_y, n_z)\). So, let's just pick one. If I take the cross product between \(N\) and any other vector that isn't parallel to \(N\), I will obtain a vector which is perpendicular to \(N\), and it can serve as my impromptu x-axis. To ensure I don't have a vector which is close to the direction of \(N\), I will grab the standard basis vector which is the most "out of line" with the direction of \(N\). So, for example (F# style),

        // Pick the standard basis vector least aligned with the normal,
        // i.e., the one whose component of `normal` is smallest in
        // absolute value. (DenseVector is assumed to come from
        // MathNet.Numerics, e.g., MathNet.Numerics.LinearAlgebra.Double.)
        let b = if abs(normal.[0]) <= abs(normal.[1]) then    
                    if abs(normal.[0]) <= abs(normal.[2]) then
                        DenseVector([| 1.0; 0.0; 0.0 |])
                    else
                        DenseVector([| 0.0; 0.0; 1.0 |])
                else
                    if abs(normal.[1]) <= abs(normal.[2]) then
                        DenseVector([| 0.0; 1.0; 0.0 |])
                    else
                        DenseVector([| 0.0; 0.0; 1.0 |])

In this case, I have 
\[\vec{x} = b \times N\] \[\vec{y} = N \times \vec{x},\]
normalizing \(\vec{x}\) after the first cross product; since \(N\) is already normalized and perpendicular to \(\vec{x}\), \(\vec{y}\) then comes out unit length automatically.
Using this impromptu coordinate system, I can express an arbitrary circle in parametric form, having center \(C\) and radius \(r\) as
\[ p(\theta) = C + \vec{x} r \cos{\theta} + \vec{y} r \sin{\theta}.\]
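As a quick check, \(p(0) = C + r\vec{x}\) and \(p(\pi/2) = C + r\vec{y}\); because \(\vec{x}\) and \(\vec{y}\) are unit length and mutually perpendicular, every \(p(\theta)\) lies at distance \(r\) from \(C\) in the plane through \(C\) with normal \(N\).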
Thus, our snap tangent circle is given from above as 
\[ p(\theta) = E + \vec{x} a \cos{\theta} + \vec{y} a \sin{\theta},\]
where we would use \(N = A - C\) (normalized) to be the beginning of our coordinate system.