We describe and validate a method for comparing programming languages, technologies, or programming styles in the context of implementing a given set of programming tasks. To this end, we analyze a number of `little software systems', each implementing a common feature set. We analyze source code, structured documentation, derived metadata, and other computed data. More specifically, we compare these systems using the NCLOC (non-comment lines of code) metric, deferring more advanced metrics to future work. To reason about feature implementations in such a multi-language, multi-technology setup, we rely on an infrastructure that enriches traditional software artifacts (i.e., files in a repository) with additional metadata on implemented features as well as the languages and technologies used. All resources are organized and exposed according to Linked Data principles so that they can be conveniently explored, both programmatically and interactively. The relevant formats and the underlying ontology are openly accessible and documented.
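The NCLOC comparison mentioned above can be illustrated with a minimal sketch. The function name `ncloc`, the single-line-comment assumption, and the sample input are ours for illustration; they do not reflect the paper's actual tooling.

```python
def ncloc(lines, line_comment="#"):
    """Count lines that are neither blank nor pure line comments.

    Hypothetical sketch: real NCLOC tooling must also handle block
    comments and per-language comment syntax, which is elided here.
    """
    count = 0
    for line in lines:
        stripped = line.strip()
        # Skip blank lines and lines consisting only of a comment.
        if stripped and not stripped.startswith(line_comment):
            count += 1
    return count

# Illustrative input: five raw lines, of which three count as NCLOC.
source = [
    "# compute total salaries",
    "total = 0",
    "",
    "for e in employees:",
    "    total += e.salary  # trailing comments do not disqualify a line",
]
print(ncloc(source))  # → 3
```

Comparing implementations across languages would then amount to running such a counter, parameterized per comment syntax, over each system's files.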