I use the PyData stack (i.e. numpy, pandas, matplotlib & co) pretty much all the time now, and I am very happy with it.

Two other key components are [Jupyter Notebooks](http://jupyter.org/) and [Anaconda](https://www.continuum.io/downloads).
Jupyter provides Mathematica-like notebooks, and Anaconda is a package management system that makes it easier to stay out of dependency hell.

Jupyter Notebooks, originally called IPython Notebooks, are what I used to create stuff for John's NIPS talk on climate networks.
There is a fair bit of enthusiasm around using Jupyter for improving the reproducibility and accessibility of scientific research.

Other math/science/data-oriented Python tools of note:

* Scikit-learn - machine learning
* Scikit-image & PIL/Pillow - image processing
* [Blaze](http://blaze.pydata.org) - data transformation pipelines & simplified interactions with various data stores
* Bokeh - interactive web visualizations
* Sympy - symbolic algebra (also see Sage)
* Numba - a very easy-to-use JIT compiler (just import it and put the @jit decorator on functions you want compiled; see the sketch after this list)
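
To give a flavour of how little ceremony Numba needs, here is a minimal sketch. The function name and the toy loop are just made up for illustration; the point is that the decorator alone is enough to get compiled code:

```python
from numba import jit
import numpy as np

# A toy row-sum written with plain Python loops; Numba compiles it
# to machine code the first time it is called, so the loops run at
# roughly C speed instead of interpreter speed.
@jit(nopython=True)
def row_sums(a):
    out = np.empty(a.shape[0])
    for i in range(a.shape[0]):
        s = 0.0
        for j in range(a.shape[1]):
            s += a[i, j]
        out[i] = s
    return out

print(row_sums(np.random.rand(1000, 100)))
```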

And for dealing with genuinely big data there is PySpark (there is also something called Ibis that I have not tried yet).

I think there are languages/systems that do individual things better than Python, but as a complete system it is hard to beat.
I still use R for some things, mainly initial explorations, but increasingly I just live in Python - and of course Emacs ;).