Combining container hashes with C++14 metaprogramming – Cure for insomnia #1729

The most missed feature in standard C++ for thirteen years – more than lambdas or variadic templates – has been hash-based equivalents of std::map and std::set. The “normal” map and set containers are implemented as binary trees, and any class of objects that one may want to store in a map or set must implement strict weak ordering – meaning that either the less-than operator must be defined for that class, or the container template must be instantiated with a custom comparison functor.

All of the fundamental types, and most of the standard library types for which a strict weak ordering makes sense, have the less-than operator defined. User-defined types which have “key” members that are fundamental types can simply define their own strict weak ordering based on the lexicographical ordering of the tuple of those key members. This makes it fairly easy to recursively apply strict weak ordering to user-defined types composed of, or derived from, other user-defined types.

Similarly, in modern C++, the hash containers require that the class of objects stored in them have a hash function defined. As with the normal map and set, the standard library provides default hash functions for fundamental types and for standard library types for which a hash makes sense.

Unlike with the normal map and set, it is less trivial to provide hash functions for user-defined types. Naïve combinations of individual hashes may lead to more collisions than desired: while the standard library provides hash functions that are generally reasonable for the fundamental and standard types, that quality doesn’t automatically carry over to hashes combined by hand.

What is needed is a straightforward way to hash multiple objects as one entity, without having to do tricky bit-twiddling ourselves. Fortunately, the standard library, and the updated C++ language, provide all that is needed. Three elements are required to make this work:
1. It must look like std::hash
2. The standard library provides std::bitset for arbitrarily sized (nb: but computable at compile time) strings of bits, and it also provides a std::hash specialization for std::bitset (see the snippet after this list)
3. It should not take more than one or two logical lines of code for clients to specialize
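
Element 2 is the keystone: the library can already hash a string of bits of any compile-time size. A sketch of just that building block:

#include <bitset>
#include <functional>

std::bitset<128> bits{0b1011};
// std::hash is specialized for std::bitset of any (compile-time) size:
auto h = std::hash<std::bitset<128>>{}(bits);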

These three elements form what I call the “vice approach” to design. It is both top-down – by specifying what the design looks like; and bottom-up – by specifying the components of the implementation. In the vice is the programmer’s head, and the process of resolving the top design and the bottom design is tightening the vice around the programmer’s head until it either explodes or the carbon fuses into diamond.

So, to wit, the top surface of the design – which could otherwise be described as “how a programmer would write it” – should look like this:

struct Department {std::bitset<7> dep_id;};
struct Employee {short id; std::u16string name; Department dep;};
// Such that we can write something resembling (not C++ syntax):
hash(Department d) → hash(d.dep_id);
//and
hash(Employee e) → hash(e.id, e.name, e.dep);

The bottom surface of the design should result in this:

std::bitset</* Size of combined hashes */> result = /* Bit shifting */;
return std::hash</* type of result */>{}(result);

We start tightening the vice by figuring out how to calculate the size of the final bitset at compile time. Without this, we’d have to resort to a runtime-sized container such as std::vector<bool>, with the runtime size calculation and degraded performance that entails when used with hash containers.

We somehow have to calculate, at compile time, the total number of bits – which I’ll call the area – of the hashes of each key member of an arbitrary structure like Department or Employee. This means template metaprogramming; namely, variadic templates and recursive list processing.

// Primary template: the area of a list of types is the area of the
// head plus the area of the tail.
template<typename Arg, typename... Args>
struct area_of
{
  static constexpr area_of_t<Arg> value =
    area_of<Arg>::value + area_of<Args...>::value;
};

// Partial specialization for a single type: the exit condition of the
// recursion. area_of_t<Arg> (defined later) is the integer type a hash
// of Arg returns.
template<typename Arg>
struct area_of<Arg>
{
  static constexpr area_of_t<Arg> value =
    sizeof(area_of_t<Arg>) * CHAR_BIT;
};

For those unfamiliar with template metaprogramming, what this code does is similar to list processing in LISP or Prolog. Say you have a list {1, 2, 3, 4, 5, 6, 7 …}. To go through each element of the list, you extract the first element – the head – and pass the remaining list – the tail – to the next operation, which is often a recursive call to the same function. A programmer would write something like this:

area_of<bitset<7>> /* and */ area_of<short, u16string, Department>

So when we pass a list of types to area_of, it extracts the head Arg, and the tail is kept in the parameter pack Args. It calculates the area of the head with area_of<Arg>::value, and then adds it to the area of the tail. Since the tail is itself still a list, the call to area_of<Args...>::value will recursively calculate the area of its head and its tail, and so on. Args... is expanded via pack expansion, which is almost like a macro that places the rest of the tail into the code. The partial specialization of area_of that takes one argument is the exit condition required of any recursive process, whether it runs at runtime or at compile time. In that exit condition, it calculates the area of each individual part of the hash by multiplying its size in bytes by the number of bits per byte. CHAR_BIT is a standard macro (from <climits>) and should be used instead of assuming all compilers use octets for bytes.
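
Note that each element contributes the size of its hash result (area_of_t, typically std::size_t), not the size of the element itself. To make the numbers concrete (a sketch, assuming a 64-bit std::size_t):

// Each element's area is sizeof(area_of_t<Arg>) * CHAR_BIT = 64 bits here:
static_assert(area_of<short>::value == 64, "area of one element");
// A list's area is the head's area plus the tail's area:
static_assert(area_of<short, u16string>::value == 128, "head + tail");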

The member value of area_of is declared static constexpr. This is pretty much a necessity in template metaprogramming: constexpr tells the compiler that the expression can be evaluated at compile time, and the result is therefore usable as a template argument for something else.
The definition of area_of_t will be explained later; for now it suffices to say that it is the integer type returned by a hash function. At this point we only need to know the sizeof that integer type. It’s usually std::size_t, but why assume when the compiler can work it out for you?

With the definitions of area_of, we can now figure out the bit size of any combination of hashes:

std::bitset<area_of<bitset<7>>::value> result;
std::bitset<area_of<short, u16string, Department>::value> result;

Now we get on to the task of tightening the other side of the vice. We’ll call it multihash and we define it thus:

template<typename...> struct multihash;
template<typename T> struct multihash<T> : std::hash<T>{};

We declare a struct template multihash with variadic template arguments. We don’t give it a definition, as there is no sensible default that covers all bases. We can, however, partially specialize it for one template argument and, by default, make it equivalent to std::hash. This makes things easier later on, when we want to calculate the hash of any single value without having to pick the correct hash depending on whether it is a fundamental or library-provided type versus a user-defined type.
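
A quick sanity check of that equivalence (illustrative; assumes the definitions above):

#include <cassert>

void sanity_check()
{
  // The single-argument multihash simply forwards to std::hash:
  assert(multihash<int>{}(42) == std::hash<int>{}(42));
}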

To put it another way, we can call multihash<short> or multihash<Department> and let the compiler figure out whether that means std::hash or the user-defined version. Now that we have the easy stuff defined, we can define a version of multihash that does the work of hashing an arbitrary number of arguments.

template<typename Arg, typename... Args>
struct multihash<Arg, Args...>
{
  auto operator()(const Arg& arg, const Args&... args) const noexcept
  {
    // Seed a bitset big enough for all the hashes with the head's hash.
    std::bitset<area_of<Arg, Args...>::value> concat{
      multihash<Arg>{}(arg)
    };
    // Shift the head's hash into the top bits...
    concat <<= (concat.size() - area_of<Arg>::value);
    // ...and OR the tail's combined hash into the bottom bits.
    concat |= multihash<Args...>{}(args...);
    // Hash the concatenation via std::hash's std::bitset specialization.
    return std::hash<decltype(concat)>{}(concat);
  }
};

multihash follows in the footsteps of std::hash in that it is a functor – an object that can be used like a function (it has operator() defined). It has to be a functor for two reasons: one of its specializations inherits from std::hash, and, more importantly, it must be a functor to be used in the hash containers. operator() is declared const noexcept to allow for optimizations, and uses return type deduction so that, again, the compiler works out the integer type to return rather than us figuring it out and maybe getting it wrong.

Similarly to area_of, this uses the head/tail recursive structure to move through the variadic template arguments one by one. Note that const Args&... is not the same kind of variadic mechanism as the printf clan. Unlike printf’s, these parameter packs are completely safe: they preserve both the number of arguments and their types. It is as though the compiler generates a completely new function for every combination of argument types that occurs in a program’s source code, but without the nastiness of C variadic functions or preprocessor macros.

The function itself calculates the hash of each individual argument, concatenates them into the bitset concat – whose size we have already calculated at compile time – and returns the hash of that uber-bitset as the combined hash. With that, we have completed the circle – tightened the vice.
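
To see the recursion in action, here is a trace of a three-argument call (a sketch; the sizes assume a 64-bit std::size_t, so the area of three types is 192 bits):

// multihash<short, u16string, Department>{}(id, name, dep) performs:
//   bitset<192> concat{ multihash<short>{}(id) };        // hash of the head
//   concat <<= 192 - 64;                                 // head into top bits
//   concat |= multihash<u16string, Department>{}(name, dep); // recurse on tail
//   return std::hash<bitset<192>>{}(concat);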

What’s left is actually explaining where area_of_t comes from. It is defined like this:

template<typename T>
using area_of_t = decltype(multihash<T>{}(std::declval<T>()));
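
One practical note: area_of refers to area_of_t, which in turn refers to multihash, so an actual translation unit has to lay the pieces out in a compilable order. A sketch of one workable ordering:

#include <bitset>
#include <climits>
#include <functional>
#include <utility>

template<typename...> struct multihash;                    // declaration
template<typename T> struct multihash<T> : std::hash<T>{}; // single-type case

template<typename T>
using area_of_t = decltype(multihash<T>{}(std::declval<T>()));

// ...then the area_of primary template and its partial specialization,
// followed by the variadic multihash<Arg, Args...>.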

C++ now has an alternative to typedef, called an alias declaration. We define the integer type used in the area_of calculation to be the type returned when we apply multihash to any single type. The combination of declval and decltype lets the compiler evaluate the expression’s type without actually running the program. Thus, there is already a definition for all of the fundamental and library-defined types, and also for any user-defined type that has specialized multihash to provide its own hash definition. So now we actually get to see how a programmer would use this library:

template<>
struct multihash<Department>
{
  auto operator()(const Department& dep) const noexcept
  {
    return multihash<decltype(dep.dep_id)>{}(dep.dep_id);
  }
};

template<>
struct multihash<Employee>
{
  auto operator()(const Employee& emp) const noexcept
  {
    return multihash<decltype(emp.id), decltype(emp.name), decltype(emp.dep)>{}
             (emp.id, emp.name, emp.dep);
  }
};

This is not really that far off from our original goal of being able to define these in just one or two lines. These are explicit specializations of multihash for the types Department and Employee. The multihash of Department must be defined before Employee’s, because the multihash of Employee needs to “see” the definition for Department. Otherwise it would fall back to the default – std::hash – which we have not specialized for Department.

Note the judicious use of decltype to supply the template arguments. This makes life much easier if we, say, later change Employee::name to std::wstring: nothing else needs to change, because we’ve told the compiler to figure the types out. Finally, for completeness, we show normal usage:

multihash<Employee> emp_hash;
multihash<Department> dep_hash;

Department dep1{0b0'00'00'01};
Department dep2{0b0'00'00'10};
Department dep3{0b0'00'00'11};

Employee emp1{1, u"one", dep1};
Employee emp2{2, u"two", dep2};
Employee emp3{3, u"three", dep3};

std::cout << dep_hash(dep1) << std::endl;
std::cout << dep_hash(dep2) << std::endl;
std::cout << dep_hash(dep3) << std::endl;

std::cout << emp_hash(emp1) << std::endl;
std::cout << emp_hash(emp2) << std::endl;
std::cout << emp_hash(emp3) << std::endl;

std::unordered_set<Employee, multihash<Employee>> empset{
  emp1, emp2, emp3};

Just because we can, we also demonstrate other new C++ features, such as binary literals (0b11010101), digit separators (31'4159'26'535897'9), Unicode string literals (u"I'm a string"), and braced initialization ({1, u"two", dep3} or {emp3, emp2, emp1}).
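
One detail the example glosses over: std::unordered_set also needs equality for its key type – it defaults to std::equal_to<Employee>, which requires operator==. A minimal sketch, assuming member-wise equality is the intended semantics:

bool operator==(const Department& lhs, const Department& rhs)
{
  return lhs.dep_id == rhs.dep_id;
}

bool operator==(const Employee& lhs, const Employee& rhs)
{
  return lhs.id == rhs.id && lhs.name == rhs.name && lhs.dep == rhs.dep;
}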


Object Orientation is unsound

http://www.stlport.org/resources/StepanovUSA.html

I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras – families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting – saying that everything is an object is saying nothing at all. I find OOP methodologically wrong. It starts with classes. It is as if mathematicians would start with axioms. You do not start with axioms – you start with proofs. Only when you have found a bunch of related proofs, can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work.

It took a while for me to understand generic programming – aka templates – in C++, and this is the insight that got me there.

Literally nothing fits into clean hierarchies and not everything is an object.

Interesting exercise in brevity and clarity

I recently had a discussion, and a challenge, comparing two languages: C++ and Python. I think modern C++ holds up really well against so-called scripting languages for quick and dirty utility programs. This is a reasonably short implementation of a prime number finder:

#include <cstdio>
// True if candidate is not divisible by any of the primes in [first, last).
template<typename T>
inline bool prime(const T candidate, const T* first, const T* last) {
  for (auto p = first; p != last && *p * *p <= candidate; ++p)
    if (candidate % *p == 0)
      return false;
  return true;
}
int main() {
  const unsigned num_primes = 10000;
  static unsigned primes[num_primes] = {2, 3};
  // Candidates start at the previous prime + 2 and stay odd, so we only
  // test divisibility by the odd primes found so far (primes + 1 skips 2).
  for (unsigned i = 2; i < num_primes; ++i)
    for (primes[i] = primes[i-1] + 2; !prime(primes[i], primes + 1, primes + i); primes[i] += 2);
  printf("The %uth prime is: %u.\n", num_primes, primes[num_primes - 1]);
  return 0;
}

Several things. Mostly, I’ve learned to re-embrace the spirit of C/C++ brevity, such as single-statement if and for bodies. But you really have to think about readability when you code in that style. The brief C style is only bad when it’s done without consideration for code aesthetics; it shouldn’t be about reducing line count, but about increasing readability. That is a bit counter-intuitive coming from a university education that told you to put every if block in braces over multiple lines.

When coded in such a manner, modern C++ can approach the ease of writing that languages like Python enjoy.

Template meta-programming rule of thumb

“Use template meta-programming to express design, not to express computation.”

Various explanations of template meta-programming use examples like a compile-time Fibonacci sequence. What those tutorials should be focusing on is how to use template meta-programming to hide incidental requirements of interfaces.

The ultimate goal of template meta-programming is to enable code like this:

int main()
{
    do_what_i_expect(/* args */);
    return 0;
}
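
For instance, here is a minimal sketch of meta-programming used for design rather than computation: the caller just writes advance_to_middle and never sees the incidental requirement of how a given iterator category moves. All names here are illustrative.

#include <iterator>

// Random access iterators can jump straight to the midpoint...
template<typename It>
void advance_to_middle_impl(It& it, long n, std::random_access_iterator_tag)
{
  it += n / 2;
}

// ...everything else has to walk there one step at a time.
template<typename It, typename Tag>
void advance_to_middle_impl(It& it, long n, Tag)
{
  for (long i = 0; i < n / 2; ++i) ++it;
}

// Tag dispatch hides the difference from the caller.
template<typename It>
void advance_to_middle(It& it, long n)
{
  advance_to_middle_impl(
    it, n, typename std::iterator_traits<It>::iterator_category{});
}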

Making templates easier with named arguments

Say you have a template that requires a large number of arguments – i.e., more than three. This is not a clear or concise interface, and it is not self-documenting; anything with a large number of arguments suffers from the same problem. You can give the arguments defaults, but if you want to override just one argument that happens to come after other defaulted arguments, you have to supply those as well – and you have to know their default values if you want to keep the default behaviour bar the one you want to modify. My solution for named template arguments is this:

// NB: TestList is a simple variadic list of NumType values (its definition
// is not shown here).
template<typename NumType,
         NumType TLow = std::numeric_limits<NumType>::lowest(),
         NumType TMin = std::numeric_limits<NumType>::min(),
         NumType TMax = std::numeric_limits<NumType>::max(),
         NumType TDef = 0,
         NumType TInv = -1,
         typename Specials = TestList<NumType>,
         typename Excludes = TestList<NumType> >
struct NumWrapper
{
    static constexpr NumType low = TLow;
    static constexpr NumType min = TMin;
    static constexpr NumType max = TMax;
    static constexpr NumType def = TDef;
    static constexpr NumType inv = TInv;
    typedef Specials inc_type;
    typedef Excludes exc_type;

    template<NumType Special>
    struct Min
    {
        typedef NumWrapper<NumType, low, Special, max, def, inv, Specials, Excludes> type;
    };

    template<NumType Special>
    struct Max
    {
        typedef NumWrapper<NumType, low, min, Special, def, inv, Specials, Excludes> type;
    };

    template<NumType Special>
    struct Low
    {
        typedef NumWrapper<NumType, Special, min, max, def, inv, Specials, Excludes> type;
    };

    template<NumType Special>
    struct Def
    {
        typedef NumWrapper<NumType, low, min, max, Special, inv, Specials, Excludes> type;
    };

    template<NumType Special>
    struct Inv
    {
        typedef NumWrapper<NumType, low, min, max, def, Special, Specials, Excludes> type;
    };

    template<NumType... List>
    struct Inc
    {
        typedef NumWrapper<NumType, low, min, max, def, inv, TestList<NumType, List...>, exc_type> type;
    };

    template<NumType... List>
    struct Exc
    {
        typedef NumWrapper<NumType, low, min, max, def, inv, inc_type, TestList<NumType, List...> > type;
    };

    NumType val = def;
    NumWrapper() = default;
    NumWrapper(NumType _v) : val(_v) {}

    operator NumType () const
    {
        return val;
    }
};

This class is something I needed so I could create data ranges for test-value generation really easily. I may want to provide a minimum that is different from the underlying type’s, but use the std::numeric_limits values for everything else; or I may want to provide extra values that have a special meaning within the context of their use.

The named-argument effect is achieved by declaring nested class templates in NumWrapper, each with an internal typedef that creates a new NumWrapper type from the enclosing template instantiation. Each nested template substitutes only the template argument it “names”, and takes the rest of the values from the enclosing instantiation. The default template arguments in the main definition, and the inheritance of those values as you continue the typedef chain, mean the user does not have to provide anything they don’t want to change.

Take special note that, as the library developer, you of course still need to know the order of the template arguments. You just have to make it so that the user of your library does not.

Declaring a new integer type becomes as simple as this:

typedef NumWrapper<short>::Max<9999>::type::Def<1>::type::Min<-10>::type::Inc<2,3,5,7,11,13,17,19>::type MyIntegralType;

It also makes it easy to figure out what the expected range and values of this integral type should be. You can even automate its specialization for std::numeric_limits:

namespace std
{
    template<typename NumType, NumType TLow, NumType TMin, NumType TMax, NumType TDef, NumType TInv, typename Specials, typename Excludes>
    struct numeric_limits<NumWrapper<NumType, TLow, TMin, TMax, TDef, TInv, Specials, Excludes> > : numeric_limits<NumType>
    {
        static constexpr bool is_specialized = true;
        static constexpr NumType min()
        {
            return NumWrapper<NumType, TLow, TMin, TMax, TDef, TInv, Specials, Excludes>::min;
        }

        static constexpr NumType max()
        {
            return NumWrapper<NumType, TLow, TMin, TMax, TDef, TInv, Specials, Excludes>::max;
        }

        static constexpr NumType lowest()
        {
            return NumWrapper<NumType, TLow, TMin, TMax, TDef, TInv, Specials, Excludes>::low;
        }
    };

    // NB: the standard lets us specialize numeric_limits for our own types,
    // but, strictly speaking, adding specializations of type traits such as
    // is_arithmetic is not sanctioned – it does work in practice, though.
    template<typename NumType, NumType TLow, NumType TMin, NumType TMax, NumType TDef, NumType TInv, typename Specials, typename Excludes>
    struct is_arithmetic<NumWrapper<NumType, TLow, TMin, TMax, TDef, TInv, Specials, Excludes>> : is_arithmetic<NumType> {};
}

This way, the user will never have to specialize std::numeric_limits ever again.
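
With the specialization in place, the limits read back as expected (a sketch, using MyIntegralType from above):

static_assert(std::numeric_limits<MyIntegralType>::max() == 9999, "max");
static_assert(std::numeric_limits<MyIntegralType>::min() == -10, "min");
static_assert(std::numeric_limits<MyIntegralType>::lowest()
              == std::numeric_limits<short>::lowest(), "lowest");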

One final note: you can use preprocessor macros to make this even easier to write:

#define NUMTYPE(type) NumWrapper<type>
#define TLOW(low) ::Low<low>::type
#define TMIN(min) ::Min<min>::type
#define TMAX(max) ::Max<max>::type
#define TDEF(def) ::Def<def>::type
#define TINV(inv) ::Inv<inv>::type
#define TINC(...) ::Inc<__VA_ARGS__>::type
#define TEXC(...) ::Exc<__VA_ARGS__>::type

typedef NUMTYPE(short)TMAX(9999)TDEF(1)TMIN(-10)TINC(2,3,5,7,11,13,17,19) MyIntegralType;

Responses to the Invalid value concept, and responses to those

Link 1
Link 2

You could argue that an end iterator is the archetypal example of an “invalid value”. An end iterator does not represent any valid part of a vector. It’s one-past-the-end, which serves as a marker. And in set and map, the one-past-the-end iterator doesn’t even point to the physical end of the container’s range.

Or what if you’re writing a test generation library?1 You need to test invalid values, and you need a way to signal “generate an invalid value of this class so I can test the function that uses it”. For example, a function may only accept a certain integer range. Any value outside of that range is invalid, and you should test for it.

If you should never create an invalid value, you may as well say you should never test how invalid values are handled.

I did consider calling it a null value, but a null value can be a valid representation of a concept. For a contrived example, what if you’re writing a simple wrapper around BSD sockets? If a socket cannot be created, you get -1. That is an invalid value. If the divine decree is that a constructor should never create an invalid value, then such a socket wrapper class can never be a legal construct.
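
A minimal sketch of that dilemma (::socket and ::close are POSIX; the wrapper class itself is illustrative):

#include <sys/socket.h>
#include <unistd.h>

class TcpSocket
{
  int fd_;
public:
  // If ::socket fails, fd_ is -1: a fully constructed object holding
  // an invalid value.
  TcpSocket() : fd_(::socket(AF_INET, SOCK_STREAM, 0)) {}
  ~TcpSocket() { if (fd_ != -1) ::close(fd_); }
  bool valid() const { return fd_ != -1; }
};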

You could argue that you should throw an exception if an invalid value is about to be created, but exceptions aren’t always the desired behaviour.


  1. Which is how I found a potential need for it. 

A general algorithm for modern C++11 design

C++11’s main domain is systems programming. A system that does anything interesting is made up of components and its complexity emerges from the interaction of those components. Software engineering is about managing complexity, and C++11 helps you manage that complexity by offering you multiple programming styles1. As programmers, we choose which style best reduces the complexity of expressing the system’s constraints. One size doesn’t fit all.

This is my ad-hoc process whenever I’m in my design mode:

  1. Imagine a new programming language that does what you need. Don’t limit the syntax to C++11 – consider constructs from other languages or that don’t even exist.
  2. Imagine short sequences of actions written in that language that achieve the main goals of the system.
  3. Break down those actions into primitives – verbs and nouns that can be composed in any manner of sentences.
  4. Read up on C++11 and keep an eye out for language features that look like those primitives – eg, operator overloading, lambdas, return type deduction, variadics.
  5. Consider any similarities between primitives and work out if the differences between the similar primitives can be figured out during compile time by the compiler, or can only be figured out at runtime.
  6. If it can be figured out during compile time, consider using templates.
  7. If using templates, consider using preprocessor macros that improve readability and usability of using templates.

  1. Not paradigm. I view it as a stylistic choice. There is some aesthetic aspect to looking at code, but it’s a stretch to call syntactic sugar a paradigm. 

While I’m at it…

I guess my first piece of advice for coding C++11 is to always read the Standard Library documentation – especially the algorithms of the STL. Many complex operations, especially those that require loops, can be reduced to just a few lines of non-loop code. An even more basic benefit is that you never have to rely on remembering what anything is supposed to do. Just browsing the algorithms and rereading their specifications can often lead to ingenious ways of combining them.
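
For example (illustrative): counting matching elements needs no hand-written loop at all.

#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
  const std::vector<int> v{3, 1, 4, 1, 5, 9, 2, 6};
  // std::count_if replaces the whole counting loop.
  const auto evens = std::count_if(v.begin(), v.end(),
                                   [](int n) { return n % 2 == 0; });
  std::printf("%td even numbers\n", evens);
  return 0;
}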