
Ellipsix Informatics: the personal website and blog of David Zaslavsky.

I'm a graduate student studying theoretical particle physics, and I also do a lot of computer programming.


First steps toward new scicomm conferences

Join the Google group mailing list to stay informed or to help with planning!

My post last week considering options for a new science communication conference series got a pretty strong response, at least relative to most things on this blog. As it turns out, there already are some people in various stages of planning new (un)conferences in the style of Science Online, much like what I was thinking about. I won't say anything about them here because those people haven't revealed their plans yet, but I hope they will go public soon!

I also completely forgot that Science Online was not monolithic; it had regional branches around the US and around the world, which were largely separate from the main organization. At least two of them are still holding events: Science Online Leiden and Science Online DC. (There are also branches in Boston, Denver, and Vancouver, maybe others that I don't know about, but they seem to be inactive.) These smaller groups could play a big role in the future of the science communication community, since, as several people have pointed out, it's a lot easier to organize events that involve fewer people. Perhaps a big international conference is too much to plan for in the next couple of years, but bringing together communicators from a couple of neighboring states? Not so hard. If there's no science communication group in your area, why not start one? If you do, it'd be an excellent thing to announce on the mailing list!

In fact, the same goes for topical conferences, like the massively successful Science Online Oceans. Again, a smaller conference is easier to organize, and it could build up over time to become something for the whole community.

Of course, there's no reason for me to only be writing about Science Online affiliates. I just do it because those are the events and groups I know, or can easily find out about. Actually, a lot of people I've heard from think that we should see any new conference, not as replacing Science Online, but as an opportunity to construct an event that the science communication community wants, from the ground up. I agree. After all, Science Online had its share of problems; the brand is somewhat tarnished, and any new events would probably do well to set themselves apart from that history.

Toward a new conference

While other people pursue their plans for new conferences, I've been musing on the seven-step "plan" (if you can call it that) I laid out in my earlier blog post. Here are some thoughts on the early steps, in light of what people have told me in the past week:

  1. Putting together a group with organizational experience: the Science Online "regulars" were no strangers to organizing events. After all, if you want to communicate with people, bringing the people to you is step 1. So the talent and the experience are out there. I've actually been in touch with several people who would be very capable of planning a new conference, once they decide it's time to go ahead and do it.
  2. Figure out what went wrong with Science Online: a lot of things. Here's a (partial) list, in fact. Here's another one. But this step is ongoing.
  3. Gauge interest: Yes, people are interested. Maybe not all the same people who used to regularly attend Science Online events, but a lot of them are interested enough that — as I mentioned above — they were talking about plans for some kind of new event even before my first blog post on the matter. The trick seems to be putting the interested attendees in touch with the interested organizers, which is what I'm trying to do right now.

The rest of the details — time, location, content, name, sponsors — is stuff for the future. For now, I think it's all about communication. So, whether you're interested in planning a conference or just want to be kept up to date on what everyone else is doing, please, join the mailing list!


Adventures in China: the toys of the trade

My boss got me a new toy today.

it's a Mac!

This is one of the perks of working for a well-funded research group, I guess. And a new research group. It's not often that you get your foot in the door right when they're buying equipment.

It's also a perk of being a phenomenologist (which is like being a theorist, but sometimes we measure things we can't calculate). Unlike experimental physicists, who have to spend their budgets on all sorts of exotic lab equipment (which I'm given to understand means obscene amounts of duct tape and aluminum foil), all you need for phenomenology is a computer, pencil and paper, and a place to sit. So there's really no reason not to blow as much money as possible on nice equipment. And this is nice equipment. It's literally the best Apple computer you can buy over here, featuring a 27-inch display (oooooh) and OS X Yosemite, the newest update to the operating system.

Not that I don't have reason to complain. The system stalled twice before I even managed to finish the setup procedure.

Eternal Flame

I guess I have to start making offerings to the Apple Gods now? Or the spirit of Steve Jobs?


On compiler warnings (and off them, too)

Quick, what's wrong with this C++ program?

#include <iostream>

using namespace std;

int test(int arg) {
    cout << arg << endl;
}

int main(int argc, char** argv) {
    cout << test(argc) << endl;
    return 0;
}
Did you guess nothing at all? Because that's what GCC says:

$ g++ -o funnyprogram funnyprogram.cpp


Pretty much every other programming language that makes you explicitly identify a function's return type will also make you actually return something from that function. C and C++ don't, and furthermore GCC, by default, doesn't even warn you that anything is wrong. This can occasionally lead to serious bugs, as I discovered today in this real-world example. I had a function that checks the name of an object and returns an enum value based on that name.

virtual const HardFactorOrder get_order() const {
    // relies on a particular convention for get_name(),
    // but can be overridden for hard factors where that convention doesn't apply
    std::string name = get_name();
    if (name.compare(0, 3, "H01") == 0) {
        return MIXED;
    }
    else if (name.compare(0, 2, "H0") == 0) {
        return LO;
    }
    else if (name.compare(0, 2, "H1") == 0) {
        return NLO;
    }
}
That was all good when all the objects involved had names conforming to the convention, but my latest batch of updates to the code involves objects with totally different names, and I forgot to override get_order(). So the default implementation above was getting used. Instead of failing with an error when none of the patterns matched, it was just not returning anything, and the variable I was assigning the return value to was getting filled with random binary nonsense: something like 2692389, where the legal values were 0, 1, and 2.

Needless to say, if GCC had complained about this from the start, I wouldn't have spent at least an hour staring at tiny text in a debugger.

tiny text in a debugger

There are over a hundred warnings that GCC can be configured to emit. Some of them are relatively useless, but most probably should be enabled if you want to save yourself a lot of debugging time. I pored through the manual and came up with the following set of warnings for myself:

-Wall -Wextra -Wformat-security -Wmissing-include-dirs -Wuninitialized
 -Wtrampolines -Wconversion -Wuseless-cast -Wjump-misses-init -Wlogical-op
 -Wstrict-prototypes -Wctor-dtor-privacy -Wold-style-cast -Wno-reorder
 -Wno-unused-parameter -Werror=delete-non-virtual-dtor -Werror=return-type

That's quite a mouthful, though a build system like CMake can handle it easily; just copy and paste into the appropriate spot in the configuration file. For times when you invoke GCC manually, you can put those options in a file, perhaps ~/Wreally-all, and then run

g++ @/home/user/Wreally-all -o program program.cpp ...

which will include the contents of the file as if you had specified it on the command line.
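For CMake specifically, one appropriate spot is an add_compile_options call (a sketch; it requires CMake 2.8.12 or later and applies the flags to every target defined after it):

```cmake
# apply the warning set to every subsequent target
add_compile_options(
    -Wall -Wextra -Wformat-security -Wmissing-include-dirs -Wuninitialized
    -Wtrampolines -Wconversion -Wuseless-cast -Wjump-misses-init -Wlogical-op
    -Wstrict-prototypes -Wctor-dtor-privacy -Wold-style-cast -Wno-reorder
    -Wno-unused-parameter -Werror=delete-non-virtual-dtor -Werror=return-type
)
```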

Feel free to use this as a starting point for figuring out what set of warnings is most useful for your own environment.



I'm having a little too much fun with my newly-discovered ability to embed Twitter widgets. Enjoy these scientific twists on popular movie titles.


Another Mathematica bug

Math is hard.

Not for Barbie, but for Mathematica.

I ran into a weird Mathematica bug while trying to evaluate the sum

\sum_{k=1}^{\infty} \biggl[-\frac{\pi^2}{6} + \psi'(k + 1) + H_k^2\biggr]\frac{z^k}{k!}

Split this into three parts. The first one is the well-known expansion of the exponential function

-\frac{\pi^2}{6}\sum_{k=1}^{\infty} \frac{z^k}{k!} = -\frac{\pi^2}{6}(e^z - 1)

The second is not the well-known expansion of the exponential function.

\sum_{k=1}^{\infty} \psi'(k + 1)\frac{z^k}{k!} \neq \frac{\pi^2}{6}(e^z - 1)

Obviously not, in fact, since if two power series \sum_n a_n z^n and \sum_n b_n z^n are equal at an infinite number of points, each of their coefficients has to be equal: \forall n,\ a_n = b_n. (You can show this by taking the difference of the two sides and plugging in a bunch of different values of z.)
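In this case the mismatch is easy to exhibit, because the trigamma function has a closed form at integer arguments: \psi'(k+1) = \pi^2/6 - H_k^{(2)}, where H_k^{(2)} = \sum_{j=1}^{k} 1/j^2 is the generalized harmonic number. Substituting that in,

```latex
\sum_{k=1}^{\infty} \psi'(k+1)\frac{z^k}{k!}
  = \frac{\pi^2}{6}(e^z - 1) - \sum_{k=1}^{\infty} H_k^{(2)}\frac{z^k}{k!}
```

and the second sum certainly doesn't vanish. Mathematica's result keeps only the first term.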

I guess Mathematica doesn't know that.

In[1]:= Sum[PolyGamma[1, k + 1] z^k/k!, {k, 1, Infinity}]
Out[1]= 1/6 (-1 + E^z) Pi^2

I had my hopes up for about two days that these two terms would cancel out, but I should have gone with my instinct that something was fishy about that result.

Apparently we have to go all the way back to version 7 to get a correct answer.

I'm still working on the rest of it.

Can you trust your calculator?

This is actually emblematic of a big problem that is on many people's minds right now because of a recent article in the Notices of the American Mathematical Society (PDF), "The Misfortunes of a Trio of Mathematicians Using Computer Algebra Systems. Can We Trust in Them?" In this paper, the authors construct some fairly simple mathematical expressions for which Mathematica gives egregiously wrong results. (The expressions may not look simple to us, but they're polynomials with integer coefficients, pretty much the easiest mathematical objects for computers to handle.)

Forget math, it's programming that's hard!

What all this shows is that relying on the results of a computer program is dangerous without some kind of independent verification that it does what you think it does. Anyone who's ever written a program should understand that. But it's all too easy to forget; we get lulled into a false sense of security by the 90% of the time that programs do work, plus the 9% of the time that they seem to work because whatever errors they produce are buried in a giant pile of output.

I think sometimes we could all do with a reminder like this not to get too trusting.