On September 18 and 19, 2015, the Data Structures for Data Science workshop gathered at UC Berkeley's BIDS [Berkeley Institute for Data Science]. It was a productive two days of presentations, discussions, and working groups, a collaborative effort aimed at expanding what data science can do.
Although attendees were mostly Python developers, the workshop reached out to and included members of many other programming communities (e.g., C, C++, Julia, R, etc.), as the workshop's explicit goal was to improve cross-language interoperability. In particular, the goal was to give Python's scientific computing tools (NumPy, SciPy, pandas, etc.) a consensus backbone data structure that would enable easier interaction with other programming languages.
Out of the discussion arose a topic that has long plagued the Python community at large: code that requires legacy Python 2.7 is holding back the development of data-science toolsets and, by extension, the progress of data science as a whole. Python 2.7 was an important part of the history of scientific computing, but it should now be left as part of that history. Thus, we convened a small working group to plan an early death for Legacy Python.
Move past Legacy Python once and for all.
In collaboration with many developers, among them @jiffyclub, @tacasswell, @kbarbary, @teoliphant, @pzwang, and @ogrisel, we discussed different options to push Legacy Python more or less gently out the door. We understand that some people still require Legacy Python in their code base, or use libraries which are still only available on Legacy Python, and we don't blame them. We understand that Legacy Python was a great language and that it's hard to move on from it. But Legacy Python retires in 2020; you will have to make the transition by then, and it will be even harder to transition at that point.
So what steps can we take to push the transition forward?
Choose your words.
The words you choose, on the internet and in real life, influence how people see Legacy Python versus Python. Treat Python 3 as simply Python, and refer to Python 2 as Legacy Python. IDEs and the TwitterSphere are starting to do that; join the movement.
Refer to Legacy Python in the past tense. It will reinforce the old and deprecated state of Legacy Python. I still don't understand why people would want to stay with a language that had so many defects:
- it did not protect you from mixing Unicode and bytes,
- it tripped you up with integer division,
- it did not allow you to replace the print function,
- its range object was not memory efficient,
- it did not permit re-raising exceptions with `raise ... from`,
- it had poor asynchronous support, without `yield from`,
- it forced you to repeat the current class in `super()` calls,
- it let you mix tabs and spaces,
- it did not support function annotations.
Legacy Python was missing many other features which are now part of Python.
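A few of those defects, shown from the Python side (a small illustrative snippet, not from the workshop itself):

```python
# Behaviors Python has and Legacy Python lacked:
assert 1 / 2 == 0.5          # true division; legacy Python silently truncated
assert 1 // 2 == 0           # integer division is explicit
print("no newline", end="")  # print is a function, so it can be replaced
r = range(10**12)            # range is lazy, hence memory efficient
assert r[10**6] == 10**6
print()
```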
Do not state what's better in Python 3; state that it was missing or broken in Legacy Python. For example, don't say Python 3 added the matrix multiplication operator; say that Legacy Python was missing it, and was preventing people from using efficient numeric libraries that rely on it.
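The `@` operator (PEP 465, Python 3.5+) is a syntax error on Legacy Python. A minimal sketch, using a hypothetical vector class rather than a real numeric library:

```python
class Vec:
    """Toy vector, just to give "@" a meaning in this sketch."""
    def __init__(self, xs):
        self.xs = xs

    def __matmul__(self, other):
        # dot product; real libraries use "@" for matrix multiplication
        return sum(a * b for a, b in zip(self.xs, other.xs))

assert Vec([1, 2]) @ Vec([3, 4]) == 11  # 1*3 + 2*4
```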
Don't respond to "and on Python 2?"
Personally, during talks, I plan not to pay attention to questions regarding Legacy Python, and will treat such questions like someone asking whether I support Windows Vista. Next question, please. The less you talk about Legacy Python, the more you imply that Legacy Python is not a thing anymore.
Drop support for Legacy Python (at least on paper)
If you are a library author, you have probably had to deal with users trying your software on Legacy Python, and spent a lot of time making your codebase compatible with both Python 3 and Legacy Python. There are a few steps you can take to push users toward Python 3.
If a user is not willing to update to a new version of Python and decides to stay on Legacy Python, they can most likely pin your library to the last versions which support Legacy Python.
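One way to make that pin explicit is a guard at the top of `setup.py`; a sketch, with a hypothetical library name and versions:

```python
# hypothetical guard for the setup.py of a library going Python-3 only
import sys

if sys.version_info < (3, 3):
    sys.exit("mylib 2.0+ requires Python 3.3 or later.\n"
             "Pin to 'mylib<2.0' if you must stay on Legacy Python.")
```

Users on Legacy Python then get an explicit message telling them which version range to pin to, e.g. `pip install "mylib<2.0"`.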
Make your examples/documentation Python 3 only
Or at least do not make the effort to have examples that also work on Legacy Python.
Sprinkling in function annotations and the `await` keyword can help communicate that your examples are Python 3 only.
You can even avoid mentioning Legacy Python in your documentation and assume your users are on Python 3; this will make the documentation much easier to write, and increase the chances of getting the examples right.
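For example, an annotated, keyword-only signature makes an example unambiguously Python 3, since it is a syntax error on Legacy Python (the function itself is made up for illustration):

```python
def greet(name: str, *, excited: bool = False) -> str:
    # annotations and keyword-only arguments: both Python-3-only syntax
    return 'Hello ' + name + ('!' if excited else '.')

assert greet('Jupyter', excited=True) == 'Hello Jupyter!'
```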
Ask users to reproduce, but on an up-to-date Python version.
Have you ever had a bug report where you asked users to upgrade your library's dependencies? Do the same with Python. If a user files a bug report against Python 2.7, ask them whether they can reproduce it with an up-to-date version of Python, even if the bug is obviously on your side. If they really can't upgrade, they will tell you; if they can, and do reproduce it, then you'll at least have converted one user away from Legacy Python (and in the meantime you might already have fixed the bug).
Defer 2.7 support to companies like Continuum
This is already what Nick Coghlan recommends for Python 2.6, and you can do the same for Legacy Python fixes. If a sufficient number of users are asking for 2.7 support, accept the bug report, but as an open-source maintainer, do not work on it yourself. You can partner with companies like Continuum or Enthought, from which users would "buy" 2.7 support for your libraries; in exchange, those companies could spend some of their developers' time fixing your Legacy Python bugs.
After a quick discussion with Peter Wang, this seems possible, but the details still need to be worked out.
Make Python 3 attractive
Create new features
Plan your new features explicitly for Python 3. Even if a feature would be simple to make Legacy-Python compatible, disable it on old platforms, and issue a warning indicating that the feature is not available on Legacy Python installs.
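A sketch of such a gate (the feature name is hypothetical): the feature works on Python 3, and Legacy Python users get a warning instead.

```python
# gate a new feature on Python 3; names are placeholders
import sys
import warnings

def shiny_feature():
    if sys.version_info[0] < 3:
        warnings.warn("shiny_feature requires Python 3; "
                      "it is not available on Legacy Python installs.")
        return None
    return "shiny"  # the Python-3-only implementation would live here

assert shiny_feature() == "shiny"
```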
You will be able to use all the shiny Python features which are lacking on Legacy Python, like Unicode characters!
Create new Python packages
Make new packages Python 3 only, and make all the design decisions you couldn't on your previous packages. Pure Python libraries are much easier to create and build once you are not held back by Legacy Python.
Helping other projects
Despite all the good will in the world, the migration path from Legacy Python can be hard. There are still a lot of things that can be done to help current and new projects push forward the adoption of Python.
Make sure that all the projects you care about have continuous integration on Python 3 (if possible, even build the documentation with Python 3), and help make Python 3 the default.
With continuous integration, check that your favorite projects are tested on Python nightly. Most CI providers allow the tests to be run on nightly without making the status of the project turn red if those tests fail; see `allow_failures` on Travis-CI, for example.
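On Travis-CI this looks roughly like the following config sketch; the `allow_failures` entry keeps a nightly failure from turning the build red:

```yaml
language: python
python:
  - "3.4"
  - "nightly"
matrix:
  allow_failures:
    - python: "nightly"
```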
Porting C extensions: move to Cython
The path to migrating C extensions is not well documented. The preferred approach is to use CFFI, but there is still a lack of a well-written, centralised document on how to integrate with Python 3. If you are knowledgeable in this domain, your help is welcome.
The things we will (probably) not do.
Make a Twitter account that shames people who use Legacy Python (though we might make a parody account which says funny things and pushes people toward Python 3).
Slow down code on purpose, and obviously so, on Legacy Python:
```python
import sys

if sys.version_info < (3,):
    import time
    time.sleep(1)
```
Though it would be fun.
Ask users at IDE/CLI startup time if they want to upgrade to Python 3:
```
$ python2
Python 2.7.10 (default, Jul 13 2015, 12:05:58)
Type "help", "copyright", "credits" or "license" for more information.
Warning: you are starting a Legacy Python interpreter.
Aren't you sure you don't want not to upgrade to a newer version? [y]: _
```
Delay Legacy Python package releases by a few weeks to give people an incentive to migrate; or should we actually consider the people on Python 2 as guinea pigs and release nightlies to them?
Remember, Legacy Python is responsible for global warming, encourages people to stay on IE6, and is voting for Donald Trump.
If you have any ideas, please send me a pull request; I'll be happy to discuss.
As usual my English is far from perfect, so pull requests are welcome on this blog post. Thanks to @michaelpacer, who already did some rereading and rephrasing of the first draft.
As you may know, we have a Jupyter logo, as an SVG. But like the prototype kilogram that acts as a reference for the unit, we suffer from the fact that the logo does not have an abstract description that explains how to construct it, which is bad.
So with a little bit of reverse engineering and Inkscape, I was able to extract some geometric primitives to build the Jupyter logo.
There is still work to be done, especially on the gradients, to get the endpoints and the colors.
But this already allows some nice things, like plotting the logo in matplotlib :-)
```python
import numpy
from matplotlib.patches import Circle, Wedge
import matplotlib.pyplot as plt

def Jupyter(w=5, blueprint=False, ax=None, moons=True, steps=None, color='orange'):
    # x, y center and diameter of the primitive circles,
    # thanks to Inkscape.
    xyrs = numpy.array((
        (315, 487, 406),
        (315, 661, 630),
        (315, 315, 630),
        (178, 262, 32),
        (146, 668, 20),
        (453, 705, 27),
    ))
    center = (315, 487)
    xyrs[:, 0] -= center[0]
    xyrs[:, 1] -= center[1]
    if not ax:
        fig, axes = plt.subplots()
        fig.set_figheight(w)
        fig.set_figwidth(w)
    else:
        axes = ax
    axes.set_axis_bgcolor('white')
    axes.set_aspect('equal')
    axes.set_xlim(-256, 256)
    axes.set_ylim(-256, 256)
    t = -1
    ec = 'blue' if blueprint else 'white'
    for xyr in xyrs[:steps]:
        xy, r = xyr[:2], xyr[2]
        if r == 630:
            fill, c = True, 'white'
        else:
            fill, c = True, color
        if r == 630:
            a = 40
            axes.add_artist(Wedge(xy, r / 2, a + t * 180, 180 - a + t * 180,
                                  fill=fill, color=c, width=145, ec=ec))
            t = t + 1
        elif r == 406:
            axes.add_artist(Circle(xy, r / 2, fill=fill, color=c, ec=ec))
        elif r < 100 and moons:
            axes.add_artist(Circle(xy, r / 2, fill=fill, color='gray', ec=ec))
        if blueprint:
            axes.plot(xy[0], xy[1], '+b')
    axes.xaxis.set_tick_params(color='white')
    axes.yaxis.set_tick_params(color='white')
    axes.xaxis.set_ticklabels('')
    axes.yaxis.set_ticklabels('')
    return axes
```
ax = Jupyter(10, blueprint=False)
And let's make the outline apparent:
ax = Jupyter(10, blueprint=True)
Colors and gradients are not right yet, but it looks great!
For small graphs you can also remove the moons.
```python
fig, axes = plt.subplots(2, 3)
for ax in axes.flatten():
    Jupyter(ax=ax, moons=False)
```
If you run it locally, below you will have an interactive version that shows you the different steps of drawing the logo.
```python
from IPython.html.widgets import interact

cc = (0, 1, 0.01)

@interact(n=(0, 6), r=cc, g=cc, b=cc)
def fun(n=6, outline=False, r=1.0, g=0.5, b=0.0):
    return Jupyter(steps=n, blueprint=outline, color=(r, g, b))
```
As usual, PRs welcome!
One of the annoying things when I want to post a blog post like this one is that I never remember how to deploy my blog. So, why not completely automate it with a script?
Well, that's one step, but you know what is really good at running scripts? Travis.
Travis has the nice ability to run scripts in the `after_success` category, and to encrypt files, which allows a nice deployment setup.
The first step is to create an SSH key with an empty passphrase;
I like to add it (encrypted) to the `.travis` folder in my repository.
Travis has nice docs for that.
Copy the public key to the target GitHub repository's deploy keys, in Settings.
In my particular setup, the tricky bits were:
Getting IPython and Nikola master:
- pip install -e git+https://github.com/ipython/ipython.git#egg=IPython
- pip install -e git+https://github.com/getnikola/nikola.git#egg=nikola
Getting the correct layout of folders:
- blog (gh/carreau/blog)
- posts (gh/carreau/posts)
- output (gh/carreau/carreau.github.io)
I had to symlink `posts`, as this is the repo which triggers the Travis build, and it is by default cloned into `~/posts` by Travis.
`carreau/carreau.github.io` is cloned via https to allow pull requests to build (and not push), as the SSH key can only be decrypted on the master branch.
In the `after_success` step (where you might also want to run checks, like looking for broken links on your blog), you want to check that you are not in a PR and that you are on the master branch, before trying to decrypt the SSH key and push the built website.
The following snippet works well:

```bash
if [[ $TRAVIS_PULL_REQUEST == false && $TRAVIS_BRANCH == 'master' ]]; then
    # safe to decrypt the key and deploy
fi
```
The Travis CLI tool gave you the instructions to decrypt the SSH key; you now "just" have to add it to the keyring.
```bash
eval `ssh-agent -s`
chmod 600 .travis/travis_rsa
ssh-add .travis/travis_rsa
cd ../blog/output
```
And add an SSH remote whose user is git:
git remote add ghpages ssh://git@github.com/Carreau/carreau.github.io
Push, and you are done (don't forget to commit first, though)!
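Putting the pieces together, the whole `after_success` deploy can be sketched as a single script; the `$encrypted_…` variable names below are placeholders for the ones the Travis CLI generates with `travis encrypt-file`, and the paths assume the layout above:

```bash
set -e
# only deploy from real pushes to master, never from pull requests
if [[ $TRAVIS_PULL_REQUEST == false && $TRAVIS_BRANCH == 'master' ]]; then
    # decrypt the deploy key (variable names come from `travis encrypt-file`)
    openssl aes-256-cbc -K $encrypted_0123_key -iv $encrypted_0123_iv \
        -in .travis/travis_rsa.enc -out .travis/travis_rsa -d
    eval `ssh-agent -s`
    chmod 600 .travis/travis_rsa
    ssh-add .travis/travis_rsa
    # push the built site to the GitHub Pages repository
    cd ../blog/output
    git remote add ghpages ssh://git@github.com/Carreau/carreau.github.io
    git push ghpages master
fi
```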
When testing, do not push to master; push to another branch and open the PR manually to see the diff. The Travis environment might differ a bit from yours.
Nikola reads metadata from a `.meta` file, which is annoying. I should patch it to read metadata from the notebook metadata. I also need a JS extension to make that easy.
PRs and comments welcome, as usual.
It occurred to me recently that we (the IPython team) are relatively bad at communicating. Not that we don't respond to questions, or welcome pull requests, or won't spend a couple of hours teaching you git even if you just want to update a URL. We are bad at writing good recaps of what has been done. Of course we have weekly meetings on YouTube, plus notes on Hackpad, but those are painful to listen to, and not always friendly.
I will hence try to gather in this small post some of the latest questions I think need to be answered.
What is Jupyter?
You might have seen Fernando's talk at SciPy last year that introduced Jupyter. It was a rough overview, a bit historical and technical in places. In even less technical words, Jupyter is everything in IPython that could apply to languages other than Python. We are creating it to welcome non-Pythonistas into the environment and to promote inter-language interoperability.
Can I be part of Jupyter?
Yes. Right now only the IPython team are Jupyter members (we are drafting the legal side of things), but if you want to be part of the steering council we'll be happy for you to join. We would especially like you to join if you have a different background than any of us. Do you use Jupyter/IPython a lot for teaching? Do you maintain a Frobulate-lang kernel? Your point of view will be really useful!
Yes, here again we are still drafting things, but we want more visibility for sponsors and/or companies that develop products based on Jupyter, work with Jupyter, and contribute back. If you have a specific request or question, please contact us; we'd like to know your point of view!
Do I need technical expertise?
No. Do you prefer to write blog posts? Do user testing? Record demos on YouTube of how to use the software? That's helpful. Are you a designer? Do you want to translate things? That's awesome too. Do you like doing paperwork and challenging administrative tasks? We can probably ask you how to set up a non-profit organisation to organise all that. You don't really know anything technical, but understand a bit of who deals with what on the project? Your help will be useful for assigning people tasks on the project! You write better English than I do? Great, English is not my first language and I definitely need someone to point out my mistakes.
When will Jupyter's first public release be?
Well, technically Jupyter is not only a project but an organisation. We already have a long list of projects you might have heard of.
If you are referring to the Notebook, then current IPython master is what is closest to Jupyter 1.0.
IPython Notebook 4.0 will probably never exist, as it might be named Jupyter 1.0 instead.
Will IPython disappear?
No, it will not. IPython the console REPL will stay IPython. The IPython kernel will probably become a separate project, called ipython-kernel, that will be usable in the IPython notebook. Basically, most of the features of IPython (traitlets, widgets, config, kernel, qtconsole...) would disappear from the IPython repo.
Are you removing Traitlets/Widgets/Magics...?
Not quite; we will move them to separate repositories, sometimes under the Jupyter organisation, sometimes under IPython.
For example, the QtConsole is not Python specific (well, it is written in Python), but it can be used with Julia, Haskell... Thus it will be transferred to the Jupyter org, where it will have its own dedicated team and release schedule. This will allow quicker releases, and make it easier for contributors to get commit rights. The team in charge will also have more flexibility on the features shipped. Right now we would probably refuse to make the qtconsole background a pink fluffy unicorn; once the split is done, the responsible team can decide to do so.
Traitlets, on the other hand (used from config to widgets), will be under the IPython organisation, as they are a specificity of the IPython kernel. They could be used in matplotlib to configure loads of things. Once split, we will probably add things like numpy traitlets, and a lot of other requested features.
IPython will probably start depending on (some of) these subpackages.
Can you describe the future packages that will exist?
Roughly, we will have a package/repo for:
- The Notebook
- maybe even 2 packages, one for the Python side, the other for the JS side
- IPython as a kernel
- IPython as a CLI REPL.
Splitting this will mostly be the work we do between IPython 3.0 and Jupyter 1.0.
Can you give a timescale?
IPython 3.0 beta in early February, release at the end of February; Jupyter 1.0 three months later. If we get more help from people, maybe sooner.
What about new features?
Starting with IPython 3.0, you can have a multi-user server with JupyterHub. I, in particular, will be working on real-time collaboration and integration with Google Drive (think of something like Google Docs). As it will not be Python specific, it will be under the Jupyter umbrella.
Only Google Drive?
Google gave us enough money to hire me for a year. I'll do my best. With more money and more people, we could go faster and have more backends. Development is also open, and PRs and testing are welcome!
It would be awesome if something like IPython existed for <a language>!
Well, it's not a question, but: it's called Jupyter. Please don't reinvent the wheel; come talk to us. We would love to have interoperability, or to give you that hook you need. Who knows, you might help steer the project in a better direction.
Since the last post, lots of things have happened. I had two pull requests on GitHub fixing some typos in previous posts (now updated). We had yet another Jupyter/IPython developer meeting in Berkeley; by the way, part of IPython will be renamed Jupyter, see Fernando Perez's slides and talk. I got my PhD in my spare time when not working on IPython, went to EuroSciPy, hacked in London at Bloomberg headquarters for a weekend organized by BloombergLab, got a postdoc at Berkeley BIDS under the direction of the above-cited IPython BDFL, and am about to move almost 9000 km from my current location to spend a year working on improving your everyday scientific workflow from the west of the United States. I think that's not too bad.
Since this is officially my day job, I get a little more time than usual to do some Python every day; I was confronted with a few Python problems, and also saw some amazing stuff.
E.g., James Powell's AST expression hack of CPython (original tweet):
```
>>> e = $(x+y) * w * x
>>> e(x = $z+10, w = 20)(30, 40)
(×, [(×, [(+, [(+, [40, 10]), 30]), 20]), (+, [40, 10])])
>>> _()
80000
```
It gives you unbound Python expressions that you can manipulate as you like. Pretty awesome, and it would allow some neat metaprogramming. It would be nice if he could release his patch so that we could play with it.
A discussion with Olivier Grisel (scikit-learn) raised the point that a good package to deal with deprecation would be nice. Not just deprecation warnings, but a decorator/context manager that takes as a parameter the version of the software at which a feature is removed, and warns you that you have some code you can clean up.
Using grep to find deprecation warnings doesn't always work, and sometimes you would like a flag to just raise instead of printing a deprecation warning, or even to do nothing when you are developing against a dev version of a library.
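A sketch of such a decorator (a hypothetical API, not an existing package): warn while the feature is still there, and raise once the library's version reaches `remove_in`, so a forgotten deprecation cannot outlive its promise.

```python
import functools
import warnings

CURRENT_VERSION = (1, 5)  # stand-in for the library's own version

def deprecated(remove_in, current=CURRENT_VERSION):
    """Warn until `remove_in`, then raise so dead code can't linger."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if current >= remove_in:
                raise RuntimeError("{} was scheduled for removal in {}"
                                   .format(fn.__name__, remove_in))
            warnings.warn("{} is deprecated, will be removed in {}"
                          .format(fn.__name__, remove_in),
                          DeprecationWarning, stacklevel=2)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@deprecated(remove_in=(2, 0))
def old_api():
    return 42

with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    assert old_api() == 42  # still works, with a DeprecationWarning
```

A raise-instead-of-warn flag, or a no-op mode for dev installs, would hang naturally off the same decorator.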
Jupyter/IPython 3.0 is on its way. Slowly; we are late, as usual. A recap will come at the right time.
I am working on integration with Google Drive, and will try to add real-time synchronisation to Jupyter 1.0/IPython 4.0. We plan on adding other operational-transform library backends, but working on Google Drive first is what got me my postdoc. (Yes, Google signed a check to UC Berkeley.)
Of course a new project means more packaging, and as you all know, Python packaging is painful. Especially when starting a new project, you need to get all the basic files, etc.
Of course you can use projects like audreyr/cookiecutter, but then you still have the hassle of:
- setting up the GitHub repository,
- finding a name,
- hooking up Travis-CI and finding the annoying setting that lets you trigger builds,
- hooking up ReadTheDocs,
- ...
- and so on and so forth:

adding everything that every project should do.
I strongly believe that intelligence should be in the package manager, not the packager, and my few experiences with Julia's Pkg, and a small adventure with nodejs's npm, convinced me that there is really something wrong with Python packaging.
I'm not for a complete rewrite of it, and I completely understand the need for really custom setup scripts, but the complexity just to create a Python package is too high. So, as a weekend hack, introducing...
So yes, I know that cookiecutter has pre- and post- hooks that might allow you to do that, but that's not the point. I just want a few simple commands I can remember and which can do most of the heavy lifting for me.
I shamelessly call it PipCreate, because I hope that at some point in the future you will be able to just do `$ pip create` to generate a new package.
So, to work, beyond its dependencies it just needs a GitHub private token (and maybe that you log in to Travis-CI once, but I haven't tested that). The best thing is, if you don't have a GitHub token available, it will even open the browser for you at the right page. Here are the steps:
$ python3 -m pipcreate # need to create a real executable in future
Now it either grabs the GitHub token from your keychain or asks you for it, and then does all the rest for you:
```
$ python3 -m pipcreate
will use target dir /Users/bussonniermatthias/eraseme
Comparing "SoftLilly" to other existing package name...
SoftLilly seem to have a sufficiently specific name
Logged in on GitHub as Matthias Bussonnier
Workin with repository Carreau/SoftLilly
Clonning github repository locally
Cloning into 'SoftLilly'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done.
I am now in /Users/bussonniermatthias/eraseme/SoftLilly
Travis user: Matthias Bussonnier
syncing Travis with Github, this can take a while...
.....
syncing done
Enabling travis hooks for this repository
Travis hook for this repository are now enabled.
Continuous interation test shoudl be triggerd everytime you push code to github
/Users/bussonniermatthias/eraseme/SoftLilly
[master (root-commit) f4a9a5d] 'initial commit of SoftLilly'
 28 files changed, 1167 insertions(+)
 create mode 100644 .editorconfig
 create mode 100644 .gitignore
 create mode 100644 .travis.yml
 create mode 100644 AUTHORS.rst
 create mode 100644 CONTRIBUTING.rst
 create mode 100644 HISTORY.rst
 create mode 100644 LICENSE
 create mode 100644 MANIFEST.in
 create mode 100644 Makefile
 create mode 100644 README.rst
 create mode 100644 docs/Makefile
 create mode 100644 docs/authors.rst
 create mode 100755 docs/conf.py
 create mode 100644 docs/contributing.rst
 create mode 100644 docs/history.rst
 create mode 100644 docs/index.rst
 create mode 100644 docs/installation.rst
 create mode 100644 docs/make.bat
 create mode 100644 docs/readme.rst
 create mode 100644 docs/usage.rst
 create mode 100644 requirements.txt
 create mode 100644 setup.cfg
 create mode 100755 setup.py
 create mode 100755 softlilly/__init__.py
 create mode 100755 softlilly/softlilly.py
 create mode 100755 tests/__init__.py
 create mode 100755 tests/test_softlilly.py
 create mode 100644 tox.ini
Counting objects: 32, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (25/25), done.
Writing objects: 100% (32/32), 13.41 KiB | 0 bytes/s, done.
Total 32 (delta 0), reused 0 (delta 0)
To git@github.com:Carreau/SoftLilly.git
 * [new branch]      master -> master
```
It finishes up by opening Travis on the test page after a push, so that you can see the first tests passing after a few seconds.
Hope that gives you some ideas; patches welcome.
Almost two weeks ago was the beginning of PyCon 2014. I took a break during the writing of my dissertation to attend, and even if I was a little concerned, because it was the first non-scientific-centric Python conference I had attended, I am really happy to have made the choice to go.
I'll write what will hopefully be a (short) recap of (my view of) the conference. Maybe not in perfect chronological order, though.
So this year, PyCon US was... in Montreal, Canada, a 7-hour flight from where I live, in a seat narrower than me, and I'm not that wide. Fortunately there was video on demand. I watched Gravity, but I must say I was really disappointed. Being a physicist, I saw a lot of mistakes that ruined the movie. Guys, it is worthless to program an engine that simulates physics if you give it the wrong rules. From basic physics classes you learn the following:
- Angular momentum conservation also applies in space.
- There is no buoyancy-driven convection, hence no flames like the ones on Earth.
- Usually the laws of physics don't change with time.
Landing in Canada, customs were more thorough than US customs. They actually searched for what PyCon was, and asked me questions about it.
Going from the airport to central Montreal was quite easy, direct by bus 747; finding the right stop was harder, as the bus driver was announcing stop numbers, not station names, and no map was available in the bus.
The hotel was a block away from the conference center (but not that easy to find when you enter by the given street address and not the main entrance).
So I was finally settled around 18 hours before the beginning of the tutorials; I tweeted to see if anybody was around, met with Agam, got dinner, slept.
During the first day I was able to attend two tutorials, which I both enjoyed, and which allowed me to adjust to the right timezone. A 1h lunch is just enough to meet with people and discuss; fortunately you get (some) time to discuss and meet people in the evening.
I caught up with Fernando, who was giving the IPython tutorial the next morning.
Btw, do not forget to update your phone's apps before leaving the country, especially the ones you want to use that require a text to be sent to you when you are abroad and do not have roaming.
The IPython tutorial on Thursday went great, except for some really weird installation issues, where:
- Anaconda refuses to install because it is supposedly already installed, yet is nowhere on the hard drive, and in the end you force a miniconda install and do all the dependencies by hand.
- Of course you brought pen drives to avoid users downloading the 300 MB installer over conference wifi, but you forgot the Win-XP-32bit installer.
I was really surprised to see people still using IPython 0.12, and the modal interface seemed to make sense for people who had never used IPython.
One thing that really makes PyCon so great (and the Python community in general) is the people. I basically missed many tutorials and talks during the week only because I was discussing with people. I must say that I learned a lot by discussing with Nick Coghlan and Fernando Pérez (ok, as a non-native English speaker I might have been really quiet, but I learned a lot, and I'm getting better; I'm starting to hear people's accents).
If you want something in Python core, you need to be persistent. By default, anything you propose to the CPython core community/PSF will be refused, but you need to persevere. If you persevere, you show the community that you are ready to put time into supporting what you are proposing.
Also keep in mind that even if there are obvious issues (ahem, packaging), people are working on them; you probably don't know all the implications.
So please either be patient, or help them with the most basic thing missing: documentation! As seen later in the conference, CPython, packaging, documentation, and pip, which once were difficult to distinguish, are becoming separate spaces; for now the people tend to be the same, but things are shifting.
Becoming a member of the PSF is now also open to everybody, so you can now get even more involved in Python.
Even after missing one tutorial, the afternoon can get busy: volunteers packed stuff in bags for the (real) opening of the conference the day after. It is also the time when sponsors start to set up their booths, and when you can talk to people before the big crowds arrive. It is usually also the time to finally see in flesh and blood people you only know by their Twitter/GitHub/... handles. Often it even takes you a day or two to find out you already know each other.
Unlike at other meals, there was (a limited number per person of) free beer; I still found it a little annoying that you had to pay for soft drinks afterward. It is also the right time to look at the different competitions around the booths to win stuff. These go from LinkedIn's informal "do this puzzle as fast as you can" to Google's, where you actually had five minutes on the clock to program an AI for your snake in a "Snake Arena" game.
The reception ended pretty early, as the conference had not really started yet, but if you plan on arriving the day before, it is a must-attend if you want to start meeting people and start adjusting to the timezone when you are flying from further west.
Still on an empty stomach (except for 3 beers), I met with Jess Hamrick (who describes her version of PyCon here) for dinner, before crashing to sleep. I won't repeat what Jess has said, as she is much more skillful with words than I am.
I really appreciated the mornings during the main conference, as breakfast was served in the convention center, which allows you to socialize early in the morning.
```python
from IPython.display import Image
Image('img/breakfast.jpg', width=600)
```
As you can see, there is room for plenty of people!
I won't spend too much time on the keynotes and talks that were presented; they are already all uploaded to PyVideo/YouTube, and I must say that the PyCon team in charge of filming/organising the events did a pretty awesome job!
In general I felt like PyCon was really different from SciPy, first by the number of people present, which is huge:
```python
from IPython.display import Image
Image('img/crowd.jpg')
```
Main conference room before a keynote (stolen from Twitter at some point; if you re-find the author, please tell me so that I can update the post).
Also, unlike SciPy, PyCon is much less oriented toward science, which gave me the opportunity to discover lots of projects, and to find out that, in the end, IPython is not that widely used and known. Having been once in Texas and once in Canada, in my own experience conference food during breaks is always better in Austin than in Montreal.
In the evenings we either went to restaurants, or to parties organised by speakers or sponsors. Note that even if it is great to have an open bar (and not cheap alcohol, but nice wine) and unlimited cake, I felt like real food was missing.
When going to another continent, I usually prefer to optimise and stay as long as I can, so I also stayed for the sprints (I'll write my PhD dissertation later). On the other hand, Fernando Perez is a busy man and was only present for the first day and a half. Hence we were not expecting a real IPython sprint beyond the first day.
So we arrived at the sprint on Monday morning off the cuff, expecting to have only a few people. To our surprise, after 1h we were around 20 around the table, with our number growing. We were more and more in need of a blackboard for a global explanation of the IPython internals. Fortunately our hacker spirit led us to a PyCon sign we stole, using its back as a blackboard: