Wednesday 29 February 2012

Are "Women in tech" really in tech?

Anneke Jong has posted an interesting article called why we need to rethink women in tech, which points out that the real reason for the seeming disparity in the numbers of "women in tech" comes down to the definition of "in tech" being used.

She explains that while several recent articles applaud the increase in "women in tech", these numbers are gathered from women who are, in essence, working in tech-adjacent roles (eg marketing and PR for IT firms, or tech-bloggers such as herself), rather than from women with actual coding skills - which is what "technical" means to those working in the industry.

As a coder-girl myself, I can see her point. There has indeed been a great uptick in "women in tech"... but the increase in actually technically-competent women has been much slower.

While the figures quoted in the media seem good at first glance, they aren't telling the full story of the gender imbalance. Anneke makes a great comparison with the music industry: "Imagine your disappointment if only a third of the 'Top Women in Music' were musicians."

This is not to say there has been no progress over the last twenty years. When I began my Comp Sci degree, I was one of only seven women in a class of 220, which was not uncommon at the time - whereas now the proportion is closer to 1 in 10... but that is still a far cry from 50%.

The article, of course, offers no solutions - because generating interest in computing (or maths, for that matter) among women is still one of the Hard Problems facing education... but it's a good discussion of the problem, and worth a read.

Friday 24 February 2012

I'm on AllTop for ruby

I made it to the AllTop ruby feed.

Apparently this is a big deal? They even have badges (see my ever-growing pile of badges at the bottom of my left-nav).

All part of my bout of SEO link-building experimentation over the last few weeks.

Saturday 18 February 2012

Industry-average productivity?

An interesting point here, and quite true: effort isn't enough. However, is it really possible to compare a group of IT guys against the "industry average"? What data do we even have on the "industry average"?

Have we gathered reliable data on any of these measures? How would we go about collecting it, and is it even really possible to do so reliably?

I guess it's possible if you're doing work that's already been done (eg building a better mousetrap)... perhaps if you're churning out cookie-cutter brochure-ware websites, or Yet Another Spreadsheet App?

Even so, I'd be curious how you'd determine the productivity of other groups in your industry - what exactly is it that we're averaging? Time to complete features? Code quality? Lines Of Code? :|

Even if we decide on a metric, asking people in your own company about their past experiences will only give you a few data points - possibly distorted by 20/20 hindsight (or by the desire to come off looking better than reality would suggest). I don't know of any reliable info out there apart from this kind of self-gathered stuff.

How do we go about gathering data across companies, and how do we trust that data? It'd still be self-reported and therefore open to various forms of bias ("Yeah, my *old* team used to be awesome! nothing like this group of idiots!"), differing levels of slave-driving ("Yes, *my* team got it done in a week... just ignore the shackles on their legs!"), differing levels of quality ("we got it done over a weekend! security? testing? what's that?") and downright manipulation ("yeah, we're Team Awesome and can code up a custom-built website in only 24 hours!!!!").

Not that it wouldn't be worth giving it a go... but of course you still fall back on the problem of metrics. What metrics reliably indicate productivity (especially when we need to hold code quality constant)?

"Features" produced per week perhaps? but what is a "feature"? how big is it? how do we compare across companies and even across industries? It's a tough call - made even harder when you add the "quality" constraint. Is a cavalier team churning out unplanned, untested code full of security holes *more* productive than a careful one that designs something elegant and simple, robust and well-tested?

Measuring quality is always a tough nut to crack.

I've seen people try for "customer satisfaction" - but in our industry it also pays to ask "which customer?" - the one paying the bill, or the one who is forced to actually use the product? Another example of "Who is the real user here?"

So... are there any resources out there? And what are people's experiences of their quality?

Sunday 12 February 2012

Link: Who should learn to code

Who should learn to program presents a brief look at the reasons why everyone should learn to program, and also a very good reason why not. Worth a look.

Monday 6 February 2012

Estimates and hiking

A fairly standard question on Quora - Why are software development task estimations regularly off by a factor of 2-3? - has sparked some discussion.

The best answer given was an extremely entertaining hiking analogy (it's the top-voted answer - go see), explaining the software development process in terms of the fractal nature of a coastline, with unexpected delays along the way.

I think it's a great analogy as far as it goes, and it would be extremely helpful in getting the idea of unexpected scope creep across to a complete IT newbie.

The answer then spawned a plethora of comments and even a counter-post that is also really interesting: Why Software Development Estimations Are Regularly Off, which argues that the hiking analogy is way off, and that software estimation is more like inventing.

I agree... though I still see great value in the hiking analogy for explaining "what goes wrong" on the kind of project that encounters the problems described.

The counter-post has itself sparked a discussion on Y Combinator which goes into a lot more detail about estimation issues in general.

It's all provided me with an entertaining read on an otherwise well-picked-over subject.