Wednesday, February 27, 2013

The Feature Class

The class that I was spending the most time on appears to be complete. All that's missing now is a lot of test cases to make sure everything works as intended, plus some smoothing of the design. I'll have some sample code up shortly (hopefully tonight). I'm actually working on converting my machine learning research code to use ProtoML, so I can show a side-by-side comparison. Some of the cool stuff so far (there's a rough sketch of a couple of these after the list):
  • automated feature transforms
  • regex indexing
  • easy concatenation of data frames (still making it more elegant but it works)
  • lazy hashing for caching
  • and pretty much everything normal data frames can do
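
To make those a little more concrete, here's a rough sketch (hypothetical helper names, not the actual ProtoML code) of what regex indexing and lazy hashing over a pandas data frame could look like:

    import hashlib
    import re

    import pandas as pd

    def regex_index(df, pattern):
        """Select the columns of a data frame whose names match a regex."""
        cols = [c for c in df.columns if re.search(pattern, c)]
        return df[cols]

    _HASH_CACHE = {}

    def lazy_hash(df):
        """Hash a data frame's contents once and memoize the result,
        so repeated cache lookups don't re-hash the same data."""
        key = id(df)
        if key not in _HASH_CACHE:
            _HASH_CACHE[key] = hashlib.sha1(df.to_csv().encode("utf-8")).hexdigest()
        return _HASH_CACHE[key]

    df = pd.DataFrame({"feat_a": [1, 2], "feat_b": [3, 4], "label": [0, 1]})
    print(regex_index(df, r"^feat_"))  # only the feature columns
    print(lazy_hash(df))               # stable key usable for a transform cache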

Other than that, we're still working on tests and docs, and implementing more features!

Tuesday, February 19, 2013

Designing Usability

Getting the simple stuff like fit(), predict(), and transform() to work was (we hope) the easy part. Since we both want to be able to use ProtoML ourselves and also share it with the world, we are working on a ton of different usability issues right now.
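
For context, this is the plain scikit-learn pattern we are wrapping (real scikit-learn calls, nothing ProtoML-specific here):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    iris = load_iris()
    X, y = iris.data, iris.target

    # transform step: fit() learns the scaling, transform() applies it
    scaler = StandardScaler()
    X_scaled = scaler.fit(X).transform(X)

    # machine step: fit() trains the model, predict() scores data
    clf = LogisticRegression()
    clf.fit(X_scaled, y)
    print(clf.predict(X_scaled[:5]))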

Priority number one is a really good test suite (in my opinion). We have a bunch of code that we think should work, but now we need to show that it works. Because of this, we spent a good part of the last week on documentation and unit tests. The idea is that this will make development easier in the long term: every new feature should only need a few extra test cases, and we'll instantly see whether it works, with no guesswork. Priority number two has been documentation. We have auto-docs set up, so all we need to do is comment our code nicely. Most new features are on pause until we have satisfactory coverage on both fronts.
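
As a rough example of what "a few extra test cases per feature" means in practice (using a scikit-learn transform as a stand-in, not actual ProtoML test code):

    import unittest

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    class TestStandardScaler(unittest.TestCase):
        def test_zero_mean_unit_variance(self):
            X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
            Xt = StandardScaler().fit_transform(X)
            # after scaling, every column should have mean ~0 and std ~1
            np.testing.assert_allclose(Xt.mean(axis=0), 0.0, atol=1e-8)
            np.testing.assert_allclose(Xt.std(axis=0), 1.0, atol=1e-8)

    if __name__ == "__main__":
        unittest.main()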

One big topic this week was dependencies. We always wanted our library to be a sort of glue connecting a variety of other libraries, and that inevitably leads to a lot of dependencies. I (Diogo) want to minimize required dependencies by putting anything optional into its own sub-module, partitioned by dependency, for the user's ease of use, while Bharath wants all dependencies to be required so that everything just works. There are obviously pros and cons to each, and we'd love to hear opinions on this.
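
Here is a rough sketch of the sub-module side of that argument: an optional integration that only needs its heavy dependency if you actually use it (the module layout and names are hypothetical):

    # e.g. a hypothetical protoml/contrib/orange_nodes.py
    try:
        import Orange          # heavy optional dependency
        HAS_ORANGE = True
    except ImportError:
        HAS_ORANGE = False

    def make_orange_node(*args, **kwargs):
        """Build an Orange-backed node, or fail with a clear message."""
        if not HAS_ORANGE:
            raise ImportError("This node needs Orange installed; "
                              "the scikit-learn nodes work without it.")
        # ... wrap the Orange learner in a node here ...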

A second big point of argument was following standard Python conventions, namely using a context manager to add nodes (for an example, see the last post, where the nodes are created). Bharath argues that it makes the code more readable, while I argue that it is unconventional and thus harder for an average user to understand. Bharath thinks there could be amazing possibilities in having all sorts of code inside the context manager (for loops, for example), and I agree that it looks great and follows the DRY principle. I just think that taking in a list of nodes would only take a little more code and make a lot more sense to most people. Furthermore, we are trying to decide if I should allow feature transforms to be created in the same manner. There may be a big debate soon on this...
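
To make the two sides concrete, here is a toy sketch of both styles (the class and calls are made up for illustration, not the actual ProtoML interface):

    from contextlib import contextmanager

    class Network:
        """Toy stand-in for a dataflow network."""
        def __init__(self):
            self.nodes = []

        # Diogo's preference: take an explicit list of nodes.
        def add_nodes(self, nodes):
            self.nodes.extend(nodes)

        # Bharath's preference: a context manager that collects nodes
        # created inside the `with` block.
        @contextmanager
        def build(self):
            added = []
            yield added.append      # call this to register a node
            self.nodes.extend(added)

    net = Network()

    # context-manager style: loops and other logic can live in the block
    with net.build() as add:
        for k in (5, 10, 20):
            add(("knn", k))

    # explicit-list style: slightly more code, nothing unconventional
    net.add_nodes([("svm", c) for c in (0.1, 1.0)])

    print(net.nodes)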

Thursday, February 14, 2013

An Early Example

This is some alpha sample code from a user perspective on how to work with ProtoML:
This code sets up a basic dataflow going from input data to machines and then to scoring. ProtoML is meant to be a high-level framework that glues together different data analysis libraries. It does this by constructing nodes that act as containers for those libraries' functions and then connecting the nodes into a dataflow network. The nodes themselves are also relatively easy to construct; that will be featured in a later post. The ones shown above are scikit-learn container nodes.
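
As a rough illustration of that flavor of code (the container class below is a made-up stand-in, not the actual alpha API):

    from sklearn.datasets import load_iris
    from sklearn.metrics import accuracy_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    class SklearnNode:
        """Toy container: wraps a scikit-learn object so it can sit in a dataflow."""
        def __init__(self, model):
            self.model = model

        def run(self, X, y=None):
            # transforms pass data through; machines produce predictions
            if hasattr(self.model, "transform"):
                return self.model.fit(X, y).transform(X)
            return self.model.fit(X, y).predict(X)

    iris = load_iris()
    X, y = iris.data, iris.target

    # dataflow: input data -> scaler node -> SVM node -> scoring
    scaled = SklearnNode(StandardScaler()).run(X, y)
    preds = SklearnNode(SVC()).run(scaled, y)
    print(accuracy_score(y, preds))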

This example used a built-in dataset. Diogo is currently working on data handling and feature transformation, and that will be covered in a blog post very soon.

Tuesday, February 5, 2013

An Introduction To ProtoML

What is ProtoML?

ProtoML is a machine learning library built on top of scikit-learn (and hopefully a few more libraries soon!) with an aim for ease of use and rapid prototyping. We are part of a Kaggle group at RPI, and we were searching for easy-to-use machine learning libraries and frameworks to quickly hack out some data analysis. Some of our favorites include scikit-learn, Orange, and Ramp. But none of them made it really easy to get off the ground once you have some clean data; there were always hoops to jump through to start scaling out, trying different machines, and using different features. That is why we decided to create a meta-modeling machine learning framework that makes it as simple as possible to chain together feature selection and machines in different combinations.

Who is behind ProtoML?
Diogo Moitinho de Almeida & Bharath Santosh! Two students from RPI with too much free time and a dream (just kidding about the free time, don’t give me more homework Prof. Goldschmidt -Diogo).

What are our goals for the semester?
  • Make the implementation of machine learning algorithms as simple to try out as possible.
  • Implement some features missing from scikit-learn that are simple yet time consuming.
  • Provide a framework for automating as much of the data analysis process as possible.
  • Have everything run fast. Do as much as possible in Cython, and try to cache everything.
  • Eventually act as the glue between the wide variety of available Python machine learning libraries.
  • Win a Kaggle competition.

Where can you learn more?
To see all that we have available and use our latest prototypes, check out:
https://github.com/CurryBoy/ProtoML (you should do it; we love guinea pigs)

What’s next for the blog?
We are going to do a combination of rough overviews of machine learning concepts and how to use them with ProtoML, and keep everyone updated with the latest and greatest features!

Minor update:
Tons of progress, and we just finished our very first meeting as an official RCOS group! Yay for us! By next week we will put up a blog post with sample code to run for basic machine learning.