Why I won't be using Microsoft.Bcl.Immutable package (despite much anticipation)

Looking at the newly released ImmutableCollection package, I see some confusing restrictions in its licensing agreement (especially for something that must be redistributed with your application):

You may not ... work around any technical limitations in the software;

Does this mean it's illegal to use reflection to get at private bits of the library?

You may not ... use the software for commercial software hosting services.

This sounds quite scary. Does this mean I couldn't make a commercial website or SaaS product using this library?

You may not ... modify or distribute the source code of any Distributable Code so that any part of it becomes subject to an Excluded License. An Excluded License is one that requires, as a condition of use, modification or distribution, that the code be disclosed or distributed in source code form; or others have the right to modify it.

Does this mean I can't use GPL licensed code with this library?

You may not ... distribute Distributable Code to run on a platform other than the Windows platform;

This looks rather obvious, but does this mean I couldn't make a website that ran on Mono and used this library, or does it only mean that I couldn't make something using Mono and then package the program and this library in a Linux debian package or something else? (ie, distribute, not just run)

This license worries me greatly about my ability to use this (much anticipated) library. These questions probably require a lawyer to fully resolve, and I'm not going to pay for the time to ask one. So I just won't be using this library, as cool as it looks and as much as I've anticipated it.

Unless you have a legal team you can ask these questions, I wouldn't use this library either, at least until Microsoft changes the license to something less hostile to developers.

Also, if you want a version of this with a sane license, see the up-and-coming MIT-Licensed Project.

Posted: 9/26/2013 4:18:27 PM

My first program

This is the oldest code I can find which once did something.

https://gist.github.com/Earlz/5848045

It's quite amazing. I can remember some of my thoughts when I wrote it. I had this huge obsession with using as few resources as possible. I had a fairly good machine for the time (512MB of memory), but was obsessed for no good reason. Windows Media Player 9 was my media player of choice, mainly because it came with Windows. However, I considered it very heavy on resources, so I set out to use my newly honed skills to write the best media player in the world.

I even remember a tiny amount of the implementation details. I remember the reason starty wasn't start. I originally didn't think it needed to show the menu's text... but then I realized I needed to show it as it scrolled off the screen. The rest of the labels (except for l) I'm 99% sure are complete gibberish. I'm quite amazed that the variable names actually make sense though, other than st. st I can only assume was shorthand for start which for some reason translated to command.

Believe it or not, I iterated onto this and tried to actually sell the thing! I had paypal buttons and everything on a website called simple-apps.

It's amazing how naive I once was...

Posted: 6/25/2013 2:30:25 AM

The great thing about mono and C#

So, it seems like a ton of programs I've been trying to use recently don't work with the latest version of their dependencies. For instance, Metasploit doesn't work with Ruby >= 2.0. I was using some other program that required Python 2 rather than 3. And we've all heard the horror stories about programs requiring specific versions of Java.

I never have this problem with C# though. In theory, it could happen, but Microsoft really likes keeping things compatible; it's their business model. And as an extension of that, Mono never seems to flat-out drop support for anything in their core software. I have never seen a program that requires a specific version of .Net or Mono.

This is awesome! The worst thing I have to do with Mono is compile it from git so that I get pre-release (good) support of portable class libraries. Now, let's break this down. Why exactly is it like this though?

  • C#/.Net compiles to IL
    • This sidesteps the issue of deprecated or changed language features. It's all IL after compilation, so it doesn't matter
  • There is a spec for .Net. In theory, if you abide by the ECMA spec, most things should work, logic-wise
  • It's easy to take my dependencies with me with .Net

Now let's step through why this isn't as easy with Ruby/Python:

  • The language is improving and getting better. This can cause old programs to break, but is unavoidable with scripting languages
  • There isn't a spec that the latest version will always implement. This is probably a good thing though
  • There is a huge emphasis on not taking your dependencies with you. This leads to breaking changes in gems and such breaking your program.
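To make the Python case concrete, here's the flavor of breakage I mean. These are the well-known 2-to-3 changes, nothing specific to any one program:

```python
# Python 2 accepted print as a statement:
#
#     print "hello"      # SyntaxError under Python 3
#
# Python 3 requires the function form:
print("hello")

# Division semantics changed too. Under Python 2, 3 / 2 == 1;
# under Python 3 the same expression is true division:
assert 3 / 2 == 1.5
assert 3 // 2 == 1  # floor division now needs its own operator
```

Any nontrivial Python 2 script runs into at least one of these, which is exactly why two interpreters end up installed side by side.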

What about Java? Honestly, I have no idea why Java doesn't benefit more. In theory, it should be just as capable as .Net here.

Am I saying .Net is perfect? By no means. In fact, .Net has seen some breaking changes:

  1. I've hit a JIT bug that only happens when using .Net 4.5's runtime, not .Net 2.0's
  2. In .Net 4.5, they changed marshaling to be more "strict", breaking at least one program I've seen (at my work)

And Mono, of course, is (by design) not a complete copy of Microsoft's .Net. In fact, I've even seen a bug where .Net accepted a piece of IL that Mono rejected, due to Microsoft not being strict about the ECMA spec.

With all that being said though, this seems to be the major leg-up for compiled-to-bytecode languages. Programs will probably keep working for a very long time, even as the bytecode runner gets updated.

This is also avoidable. I've seen some scripting languages use a version-number attribute so that they can avoid this scenario. I'm sure there are other methods as well.

All I know is, I'm tired of having Python 2 and 3 installed on my system because not all my programs will run on just one or the other.

Posted: 6/3/2013 5:49:16 AM

Command line SUMP logic analyzer client

So, I've been trying to poke around on this modem I received. One problem: I don't have an oscilloscope. I do however have an FPGA. And FPGAs can be almost anything you want them to be, so mine became a SUMP logic analyzer, thanks to the porting effort at Gadget Factory.

Anyway, the problem is that the official Java client doesn't appear to work on Linux 3.x, or on 64-bit machines. I avoid multilib (running 32-bit programs on 64-bit Linux) like the plague, so I decided this would not do. On my quest I also found a half-working Python logic analyzer client. This one I got to work, but it's quite clunky and the code is GPL licensed. Where's the fun in that?

So, I wrote my own client, apparently in only a week. It's called monosump. It has zero external dependencies, and as long as Mono/.Net 4 works, this client works. I've even tested it on the Raspberry Pi and it worked fine.

Here are the big features:

  • API-centric. The command line client is just a separate project consuming the easy to use API
  • Command line. Ever wanted to do some analysis with awk and friends? Now you can
  • Plain text and JSON data output. Easy to consume.
  • BSD licensed. Have a commercial project in mind? Go ahead and use my code
  • Works everywhere there is a Mono implementation (which is everywhere with the processing power to use this)
  • Simple (but limited) command line interface and powerful configuration file interface

Of course, all this being said, it sucks currently. I have a v1.0 release, but that's only because I'm impatient. Next steps are analysis plugins, serial support at the command line, and maybe a simple web-app interface to take advantage of existing cross-platform JavaScript graphing libraries.

Posted: 5/29/2013 4:59:33 AM

Marketplaces Enforce Master-of-None Mentality

Marketplaces are great. On my Android phone I have, at my fingertips, a huge number of applications that just work. Marketplaces provide us with a sense of security. To uninstall an app, there is guaranteed to be exactly one thing you must do. To install an app, there is exactly one way to install it. It is self-contained; there are no dependencies I have to install. Configuration is minimal, if it exists at all. Discovering how to launch your app is straightforward. It just works.

Let's contrast that with a typical Linux system. I use Arch Linux. So, when I go to install an application, I use pacman -S someapp. And I cross my fingers and pray that it works. Usually it does. Sometimes I have to manually download and install things that aren't in this blessed "marketplace" of sorts. It's never as seamless as "closed" markets though. A Linux application can do anything. It could corrupt my system (if I give it sudo), it could trash my home directory, it could install spam that I could never figure out how to uninstall.

These are two sides of a coin. They are naturally at odds. There isn't really a good way of curing these problems with Linux. Most people (myself included) would say they aren't problems at all, but rather design choices.


Dependencies... how I miss thee

So, what's this all about? If you look on the Android Marketplace, iOS AppStore, or god forbid the Windows Store, you'll see a stark difference compared to Arch Linux's packages. And no, it's not the open source aspect.

If you want to search through a file in Linux, you'll probably use something like

cat somefile | grep 'something'

You'll use the cat utility to read the file in and pipe the contents to grep, where grep will search across the file for "something".

How do you do that on Android? Or Windows 8/RT?

Basically, you can't. At least, not in a good way. On Android, file managers are possible, and most of them include some basic searching capabilities, but you won't get the power of grep. You won't be able to do awesome shit like you can by combining the strengths of different applications.

If I wanted to write a file search utility for Android, I'd have to first build a sub-par file browser to navigate to the file, and then implement my actual search functionality.

Markets enforce master-of-none mentality

I once had a magnificent plan to port my scripting language to Android. How much work would that require?

  1. File browsing/saving/loading
  2. Text editor (syntax highlighting, searching, etc. More than just a text box)
  3. My programming language

And that's just the start. If I want to provide APIs in my language to search in files, I have to implement that. If I want network access, I have to provide that. There is no netcat or grep that people could utilize instead of my sub-par APIs.

Why netcat doesn't exist in markets

If you wanted to implement a netcat utility in any marketplace, it'd be fairly pointless. The power of netcat comes from being able to pipe it to other places that the original authors never even dreamed of. What's that, you want to make a TCP/IP proxy?

nc -l -p 8080 | nc example.com 80

You want something that can encrypt a file and send it off somewhere?

openssl aes-256-cbc -salt -e < file-to-transfer | nc example.com 9999

How would you do this in a marketplace application? Sure, maybe you could cobble together some solution like finding a dedicated TCP proxy. And then finding a file encrypter and a TCP/IP program that can send files... but this requires that someone developed such an application beforehand.

You can't just create some general-purpose utility. You must create some "multi-purpose" utility where you come up with all of the interesting use cases you can and implement them. If you missed one, then there just isn't a solution to that problem. There is no way to combine your program and some other program to solve the problem. It's all or nothing.

It's not just markets

If you notice, desktop Windows does this to a certain extent as well. Its I/O redirection is downright terrible (although I hear PowerShell is nice). This is probably why you see all-in-one applications everywhere. Linux has a general "air" about it that encourages you to make things modular and utilize other tools where possible.

However, marketplaces are the only place where this is actually enforced. Windows 8 has extremely limited IPC facilities. Oh, you gave me a (very limited) search API that works across every application, big whoop. Windows 8 especially enforces it. Did you know that you can't make a general-purpose text editor in Windows 8? Impossible. There is no way to open every file with a single application. You're forced to declare which file extensions you'll be allowed to edit (and no, * doesn't work).

Finally, the bugs

Have you ever encountered a bug in a walled-garden application? Of course you have. Would you say you encounter them more than in desktop applications? Probably. Developers can't focus on just one thing, because if they don't implement a feature, their application simply can't do it. You get a feature request in your netcat-wannabe for sending text on demand instead of files. Now you have to implement some kind of text editor. Now some people want an automated response that returns the current date and time. Yeah, good luck keeping up with the wishes of your users.

Developers can't just worry about the one thing they do well. They also have to worry about all the things people might want to combine to make their application more useful. This is why I believe that most market applications have more bugs than their counterparts on desktop operating systems.

For the picky

Yes, I know I probably have some false assumptions, but I'm not far off. I'm no pro in Android and such. It's probably possible to do some rudimentary IPC and maybe even some kind of dependency stuff... but it's not the norm, and I know it's probably not easy for you OR the end user.

Posted: 4/30/2013 4:22:51 AM

A Proposal For Spam-Free Writeable APIs

I've been having an interest in Bitcoin recently, but it would appear I'm too late to the party to make any money on mining. So, what's the next best thing? Taking their idea and using it elsewhere.

The idea behind Bitcoin is to make a particular thing a rare commodity. Now let's pretend we have a website like, say, http://stackoverflow.com. We want to make a public API for it that is writable. Current options appear to be:

  1. API keys which require a human to register
  2. ????

I'll throw a second option into the mix. "API Coins" which require a fair bit of computing power to create and are only good in a certain context.

Let's say you wanted to make an account at stackoverflow with a machine that didn't require any human interaction, or rather, didn't require a captcha, valid email, personal info, etc. In theory, a program could register it completely in an automated fashion.

My proposal to prevent masses of spam bots: make it expensive. Use a Bitcoin-like scheme. Instead of SHA256, I'd go for scrypt, because it performs comparatively better on CPUs than on GPUs, and is thus feasible to execute from JavaScript.

So, when you visit the register page I provide something like

  1. Conditions a hash must match (difficulty)
  2. The value hashed must contain a certain provided phrase (to prevent pre-mining of API coins)
  3. That's it!

You calculate a hash which matches and poof! You've got an API key. Ideally, this would be a process that takes no more than 5 minutes on the slowest of hardware. Now, when you need to perform an operation, there will be another hash request, but it won't be as intense as the creation of your API key... but if you're a bad boy, your API key will get banned and you'll have to generate a new one.

Now, how does our site know that API keys are "valid" without pre-mining risk? The key is to make the nonce phrase random and unique, but slightly persistent. So, when the request is made to get the nonce, it is stored for, say, an hour. If the API key isn't "found" within an hour or two, it's considered invalid. This would prevent batching of API key creation.
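As a rough sketch of what minting and verifying an API key might look like under this scheme, here's a Python version. The function names, the leading-zero difficulty encoding, and the scrypt cost parameters are all my own illustrative choices, not part of any spec (and a browser client would do the mining loop in JavaScript instead):

```python
import hashlib
import os

def mint_api_key(server_phrase: str, difficulty: int) -> str:
    """Search for a nonce such that scrypt(phrase + nonce) starts with
    `difficulty` zero hex digits. The server only runs scrypt once to
    verify the returned nonce, so verification stays cheap."""
    while True:
        nonce = os.urandom(8).hex()
        digest = hashlib.scrypt(
            (server_phrase + nonce).encode(),
            salt=server_phrase.encode(),  # phrase doubles as salt here
            n=2 ** 14, r=8, p=1,          # illustrative cost parameters
        ).hex()
        if digest.startswith("0" * difficulty):
            return nonce  # the nonce is the API key material

def verify_api_key(server_phrase: str, nonce: str, difficulty: int) -> bool:
    """Server side: one scrypt call, against the stored nonce phrase."""
    digest = hashlib.scrypt(
        (server_phrase + nonce).encode(),
        salt=server_phrase.encode(),
        n=2 ** 14, r=8, p=1,
    ).hex()
    return digest.startswith("0" * difficulty)

# With difficulty 1, minting takes ~16 scrypt calls on average; a real
# deployment would tune difficulty (and scrypt's n) much higher.
key = mint_api_key("server-issued-phrase", 1)
assert verify_api_key("server-issued-phrase", key, 1)
```

Because the server-issued phrase is baked into the hash, a key mined against last hour's phrase fails verification, which is exactly the anti-batching property described above.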

To help enforce these "hard" checkpoints, if a user wanted to, say, post a comment, they'd be given a request like the API key request: a certain difficulty and a phrase to be contained within the pre-hash value. Ideally, this would be significantly easier than generating an API key. You could also enforce throttling at this phase by increasing the difficulty for their account as they post more and more things.

The other awesome part about this scheme? It's anonymous other than the IP address in the logs. You can be reasonably sure that it's a human posting while getting absolutely no personal information and storing absolutely no personal information. No passwords needed. You effectively have a sort of private key instead, stored in a cookie or some such.

This also enables awesomely easy registration for users of your API. "What's an API key?" crops up plenty. Eliminate the need for it!

Some unsolved problems with this approach however:

  1. How to link accounts with it? Assuming you'd want multiple API keys to each API user?
  2. Password to facilitate linking accounts?
  3. What if you lose your key?
  4. What about those mystical FPGA scrypt machines I've heard rumors about?

I might throw together an extremely simple "micro-blog" thing (Twitter clone) that uses this concept just to see how it turns out. The hardest part would probably be implementing scrypt in JavaScript.

Note: One last thing. This isn't to "stop" spam. It's rather to make your site so expensive to spam that it's not profitable. Sure, you can always rent a few hundred EC2 VMs or some such and compute a few hundred API tokens, but how much is that going to cost? And how much do you expect to make from spamming that site?

Posted: 3/31/2013 4:11:10 AM

Breaking Changes For Everyone!

So, remember how I said there would be no more breaking changes to the router of BarelyMVC? Well, part of the whole "making it testable" effort meant that the current API, as it was, sucked major balls. We need some way to simply get an IServerContext into the created HttpHandler. It's not really possible without magic the way the API currently is... So, it's changing.

The Proof Of Concept for a tiny taste of the new API is here. Highlights:

  • Fluent API blog.Handles("/blog/new").With((c)=> c.New()).RequiresAuthentication()
  • Worry less about getting data from routes/forms into your HttpHandler methods
  • Treat handlers more like controllers
  • No more reliance on static class elements like HttpContext.Current
  • Will reduce code duplication for adding similar routes on the same "controller"
  • STILL no reflection or manual casting required! Not even an explicit generic parameter!

With the way I foresee this working, I can honestly say it looks significantly better than ASP.Net MVC's way of routing. I mean, we're talking FLUENT API cool. I'd dare to say it's also better than OpenRasta's form of routing.

In case you were too lazy to look at that gist. Here is an example:

var blog=router.Controller(() => new BlogController());
blog.Handles("/foo/bar").With((c) => c.View());

Can it read any more like plain English? I don't believe so. And still, no magic, no reflection, no casting. Just good ol' fashioned generic delegates and some neat compiler support for implicit generic parameters.
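For what it's worth, the chaining itself (each call handing back an object you keep calling into) is easy to sketch in any language. Here's a toy Python imitation of the same shape; it obviously loses C#'s static typing, and none of these names are BarelyMVC's real internals:

```python
class Route:
    """One URL pattern bound to a handler on a controller instance."""
    def __init__(self, pattern, factory):
        self.pattern = pattern
        self.factory = factory
        self.handler = None
        self.requires_auth = False

    def With(self, handler):
        self.handler = handler
        return self  # returning self is what makes the API fluent

    def RequiresAuthentication(self):
        self.requires_auth = True
        return self

class ControllerBinding:
    """What router.Controller(...) hands back; registers routes for one controller."""
    def __init__(self, router, factory):
        self.router = router
        self.factory = factory

    def Handles(self, pattern):
        route = Route(pattern, self.factory)
        self.router.routes.append(route)
        return route

class Router:
    def __init__(self):
        self.routes = []

    def Controller(self, factory):
        return ControllerBinding(self, factory)

    def dispatch(self, path):
        for route in self.routes:
            if route.pattern == path:
                # a fresh controller per request, as the factory lambda implies
                return route.handler(route.factory())
        raise KeyError(path)

# Usage mirroring the C# example:
class BlogController:
    def View(self):
        return "blog view"

router = Router()
blog = router.Controller(lambda: BlogController())
blog.Handles("/foo/bar").With(lambda c: c.View())
assert router.dispatch("/foo/bar") == "blog view"
```

The C# version gets the extra trick that the compiler infers the controller type from the factory, so `c` inside `With` is statically typed with no casts; the chaining structure is the same.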

So, yes, it's a huge breaking change, but your code will suck less after migrating. Trust me, I have about 50 lines of code just for routing for this blog. I don't take breaking changes to routing lightly.

Posted: 2/14/2013 7:17:37 AM

BarelyMVC Roadmap

So, I've been working on BarelyMVC recently and realized that there isn't a formal roadmap. I think that's a bit of a disgrace and wish to change that. So, here is the roadmap for version 1.0 (in order, sorta):

  1. Rework to use IServerContext so the entire framework is easily mocked and unit testable (and, as a result, so is the application built on top of it). (Note: the API should be fairly stable throughout this conversion)
  2. Strive for better unit test coverage (I don't plan on measuring it, but a lot better than right now)
  3. Get session support built into FSCAuth
  4. Integrate CacheGen into BarelyMVC
  5. Documentation and a tutorial or two
  6. Visual Studio and/or MonoDevelop project templates
  7. Compare and contrast document between ASP.Net MVC and BarelyMVC
  8. Setup a CI and/or nightly build server

Posted: 1/20/2013 7:50:03 PM

ASP.Net MVC and BarelyMVC performance comparison

So, I was very curious as to how ASP.Net MVC and BarelyMVC stacked up against each other performance-wise. So, I did some benchmarking! I believe these numbers are fairly accurate, but I didn't build a dedicated machine for it, so they should be taken with a small grain of salt.

First off, the two test projects can be downloaded here. It's just two bare-bones projects: the ASP.Net MVC "welcome to MVC" template site, and my recreation of that in BarelyMVC using the standard BarelyMVC style.

Platform

  • Arch Linux 64-bit (kernel 3.x)
  • 8GB of RAM
  • Two 500GB hard drives in RAID-1
  • Mono 2.10.8
  • AMD Phenom II X6 (6 cores)
  • Release mode builds with debugging disabled in the web.config
  • Served using Mono's xsp
  • Barebones sample. No database or other I/O
  • Each test was "warmed up" (I loaded a page before beginning the test, to let things compile where needed)
  • ab -n 10000 -c <concurrency> http://127.0.0.1:8080/ was the command used

And, here are the fancy charts I made:

Requests/second performance measurement

Time/request performance measurement

As you can tell, BarelyMVC blows ASP.Net MVC out of the water! Big things to note are that BarelyMVC can serve a request in just over 1ms in the best case; ASP.Net MVC needs at least 7ms. Also, interestingly, BarelyMVC's performance at very high concurrency actually stays kinda sane. A 100ms request is bearable. A 300ms one is approaching noticeably slow. I also had results for a concurrency level of 1000, but they made the graph harder to read. Hint: ASP.Net MVC didn't get better (although BarelyMVC started getting a bit insane as well).

Requests per second is known as a fairly useless metric, but I still think it has some use in showing how much load a server can handle under massive concurrent usage.

Anyway, if you're considering making something that has to stand up to a lot of load and you're open to alternative (ie, non-Microsoft) frameworks, you should definitely take a look at BarelyMVC. Its API is fairly stable now and it's quickly approaching beta status. It's raw and to-the-metal with as little magic as possible... but thanks to T4 and lambdas, it's still easy to read, write, and debug. (Also BSD licensed! :) )

Posted: 1/18/2013 4:30:51 AM

CacheGen Proof Of Concept

So, I finally have CacheGen to where I can probably integrate it into this website. I did some rough concurrency testing (spawning 60 threads accessing the cache with random clearing). It's a rough test, but it does show that there isn't anything obviously wrong with it at least.

So, the code it generates is brilliantly simple as well. Some good use cases for this:

  • Keep all your cache settings in one place
  • Statically typed and named! No more remembering manual casts or magic strings
  • Make your caching logic testable! It generates code against an easily mockable interface
  • Switch out your caching layer with ease.

Now, I'm only going to elaborate on the last point. "Why would I ever want to change out my caching layer!?"

Here's why. You built Bookface 1.0 and a few dozen users are on it. People start talking though, and suddenly you have a few thousand (or more). Your page response times have crept up into the seconds range. Something must be done. After upgrading servers and expanding some of the hardware, you find the bottleneck: your web server's caches are being cleared too often. There isn't anything you can do though; the memory is maxed out as it is. So, obvious choice: use something like memcached for distributed caching on a dedicated server or two.

What makes using memcached or something like it so hard? It requires code changes! Luckily for you, you used CacheGen. Why does that help? All of your caching is in one place, and your interface to the caching method (CacheMechanism) is in one single simple class. It's trivial to implement a two-level cache between ASP.Net and memcached at this point, and all of your code relying on your cache will just magically work without being changed.
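CacheGen itself is C#/T4, but the underlying pattern is easy to sketch. Here's a language-neutral illustration in Python of the "one swappable cache mechanism" idea; all of the names here are my own, not CacheGen's actual API:

```python
from typing import Optional, Protocol

class CacheMechanism(Protocol):
    """The single seam all caching code talks through."""
    def get(self, key: str) -> Optional[object]: ...
    def set(self, key: str, value: object) -> None: ...

class InProcessCache:
    """Day one: a plain dictionary on the web server."""
    def __init__(self) -> None:
        self._store: dict = {}
    def get(self, key): return self._store.get(key)
    def set(self, key, value): self._store[key] = value

class TwoLevelCache:
    """Later: check local memory first, fall back to a remote store
    (memcached in the post; any get/set backend works)."""
    def __init__(self, local: CacheMechanism, remote: CacheMechanism):
        self.local, self.remote = local, remote
    def get(self, key):
        value = self.local.get(key)
        if value is None:
            value = self.remote.get(key)
            if value is not None:
                self.local.set(key, value)  # promote to level one
        return value
    def set(self, key, value):
        self.local.set(key, value)
        self.remote.set(key, value)

# Application code only ever sees CacheMechanism, so swapping the plain
# dictionary for the two-level cache is a one-line change at startup.
cache: CacheMechanism = TwoLevelCache(InProcessCache(), InProcessCache())
cache.set("user:42:profile", {"name": "earlz"})
assert cache.get("user:42:profile") == {"name": "earlz"}
```

The point is the seam: because every caller goes through one small interface, the memcached migration story above touches exactly one class.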

This is what I think makes CacheGen especially awesome. It manages your caching settings, makes everything statically typed, AND lets you have an almost unreal amount of flexibility.

It's not quite ready for primetime yet. I've proved that it should work; the thing to do now is clean up the API some and add some more unit testing to see if I can catch more bugs.

Anyway, I don't expect this process to take too long. I plan to tag an alpha release for this relatively soon (within the month).

Posted: 12/8/2012 7:21:17 AM