C++14 Features



This post is a summary of the new features in C++14. Briefly, C++14 does not feel like a major version; it is more like an add-on to C++11 that improves the features C++11 introduced.

Regarding compiler support for C++14 , you can see the table on en.cppreference.com :


  1. Constexpr with fewer constraints : C++11 introduced constexpr , however you could not use branches ( if/else ) or loops. In C++11 you had the workaround of using ternary expressions and recursion :
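A minimal sketch of the C++11 workaround, using a hypothetical `factorial` (not the post's original example):

```cpp
#include <cassert>

// C++11-style constexpr: the body must essentially be a single return
// statement, so branching is done with the ternary operator and looping
// with recursion.
constexpr int factorial(int n)
{
    return (n <= 1) ? 1 : n * factorial(n - 1);
}

// Usable in contexts that require a compile-time constant:
static_assert(factorial(5) == 120, "evaluated at compile time");
```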

In C++14 , you can now use branches and loops , and you can also have more than one return statement :
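The same hypothetical `factorial`, rewritten in the relaxed C++14 style:

```cpp
#include <cassert>

// C++14 relaxed constexpr: local variables, loops and multiple return
// statements are all allowed inside a constexpr function.
constexpr int factorial14(int n)
{
    if (n <= 1)
    {
        return 1;
    }
    int result = 1;
    for (int i = 2; i <= n; ++i)
    {
        result *= i;
    }
    return result;
}

static_assert(factorial14(5) == 120, "still a compile-time constant");
```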

Note that you still have limitations such as :

  • You can not use goto , inline assembly , thread-local or static variables
  • Local variables must be initialised and must be of literal type

The assembly output of the above C++14 constexpr call can be seen below :


2. Auto as return type , decltype(auto) and template functions 

In C++03 , you could not generalise the return type of a template function if it depended on the template arguments :

In C++11 , using decltype alone did not help , as the compiler parses from left to right. Therefore the example below does not compile :

As a solution in C++11 , you can use auto as the return type of a function , however you also had to use a trailing return type ( the part starting with -> ) since parsing is from left to right :
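A sketch of the C++11 trailing-return-type form, with a hypothetical `multiply`:

```cpp
#include <cassert>
#include <type_traits>

// C++11: auto with a trailing return type. decltype can refer to t and u
// here because the parameters are already declared when the arrow is parsed.
template <typename T, typename U>
auto multiply(T t, U u) -> decltype(t * u)
{
    return t * u;
}

static_assert(std::is_same<decltype(multiply(2, 3.5)), double>::value,
              "return type deduced from the operands");
```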

In C++14 , you no longer need trailing return types :

On the other hand , a C++14 auto return type can not deduce constness or references. As a solution , you can use decltype(auto) as the return type in order to preserve constness and reference-ness :
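A sketch of both points, with hypothetical `multiply14` and `first_element` helpers:

```cpp
#include <cassert>
#include <vector>

// C++14: plain auto return type, no trailing return type needed.
template <typename T, typename U>
auto multiply14(T t, U u)
{
    return t * u;
}

// auto would drop the reference here and return a copy; decltype(auto)
// preserves it, so the caller can assign through the returned reference.
template <typename Container>
decltype(auto) first_element(Container& c)
{
    return c[0];
}
```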

3. Lambda capture initialisers : You can evaluate arbitrary expressions in lambda captures and assign them to variables that are only in the scope of your lambda expression :
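A small sketch of an init capture, also showing the common move-into-lambda use case (the names are illustrative):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// C++14 lambda capture initialisers: evaluate an arbitrary expression at
// capture time and bind the result to a name visible only inside the lambda.
inline int captured_sum()
{
    auto ptr = std::make_unique<int>(10);
    auto lambda = [value = *ptr + 5, moved = std::move(ptr)]() {
        // 'value' was computed when the lambda was created (15);
        // 'moved' now owns the int, which was impossible to express in C++11.
        return value + *moved;
    };
    return lambda();
}
```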

4. Generic lambdas : You can now use auto for function arguments which allows you to write even more powerful lambda expressions :
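A minimal sketch of a generic lambda (the `make_adder` helper is illustrative):

```cpp
#include <cassert>
#include <string>

// C++14 generic lambdas: an auto parameter turns operator() into a template,
// so one lambda works for any type supporting the operation.
inline auto make_adder()
{
    return [](auto a, auto b) { return a + b; };
}
```

The same closure adds ints, doubles or strings, whereas in C++11 you would have needed a separate lambda (or a hand-written functor template) per type.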

5. Variable templates :

Before C++14 , when you wanted to use template classes as simple value evaluators , you had to use the templateClass<input>::value idiom. You do not need that syntax anymore , as you can now use variable templates , which are more practical to code and more expressive. Below , the first example is without variable templates and the second one uses them :
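A sketch of both styles side by side, using a hypothetical `pi` constant:

```cpp
#include <cassert>

// Pre-C++14 idiom: a class template exposing a static ::value member.
template <typename T>
struct pi_holder
{
    static constexpr T value = T(3.141592653589793);
};

// C++14 variable template: the same thing, directly as a variable.
template <typename T>
constexpr T pi = T(3.141592653589793);

static_assert(pi<int> == 3, "truncated for int");
```

Usage becomes `pi<double>` instead of `pi_holder<double>::value`.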

Below you can see a nice use of this feature combined with std::accumulate :

6. Binary literals and digit separators : You can define a numeric literal in hexadecimal using the 0x notation. In C++14 , you can now use the 0b notation for binary literals. Additionally , you can use apostrophes to group digits , for example nibble by nibble or byte by byte :
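A short sketch of both features together:

```cpp
#include <cassert>

// C++14 binary literals (0b) and digit separators (').
constexpr unsigned flags = 0b1010'0001;   // 0xA1, grouped nibble by nibble
constexpr unsigned mask  = 0b1111'0000;   // high nibble
constexpr long     big   = 1'000'000;     // separators work in decimal too

static_assert((flags & mask) == 0b1010'0000, "bit masking reads naturally");
```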

7. Heap elision : This is a feature initially implemented by the Clang compiler and eventually proposed and accepted for C++14. It basically allows the compiler to optimise out memory allocations. You can see the proposal here : http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3664.html

Currently , Clang compilers later than version 3.0 apply this optimisation. Below you can see that Clang applies it whereas GCC does not :


Note that this is a feature that can hide memory leaks as you move your project from one compiler to another.

8. Library features :

a) std::make_unique : C++11 introduced std::make_shared , however it lacked std::make_unique. In C++14 , you can now use std::make_unique from the standard library to construct your std::unique_ptr :
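A minimal sketch (the `Widget` type is illustrative):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>

struct Widget
{
    std::string name;
    int id;
    Widget(std::string n, int i) : name(std::move(n)), id(i) {}
};

// C++14: std::make_unique forwards its arguments to the constructor,
// avoiding a naked 'new' and the exception-safety pitfalls that come with it.
inline std::unique_ptr<Widget> make_widget()
{
    return std::make_unique<Widget>("example", 42);
}
```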

b) std::shared_timed_mutex & std::shared_lock : This is a multiple-reader / single-writer mutex. It is quite useful for scenarios where there are frequent reads but rare updates.

std::shared_lock is the shared-ownership counterpart of std::unique_lock : while std::unique_lock acquires a mutex exclusively , std::shared_lock acquires a std::shared_timed_mutex in shared mode for the multiple-reader pattern.

An example can be seen here :


c) Chrono literals : C++11 introduced user-defined literals. C++14 provides prebuilt literals for the chrono date-time library , which helps to produce more readable code :





Calling C++ and user-mode APIs from Python

1. Introduction

This post shows how to call C and C++ functionality from Python using the Python standard library's ctypes module. For its official reference , see :


Ctypes is a cross-platform library , therefore it can consume shared objects on Linux as well as DLLs on Windows. It is quite useful in several situations :

  • You can create external test harnesses for your C++ projects by simply exposing a C interface
  • When you expose your C++ via C linkage , you might also be able to use that from other languages such as Java and C#
  • You can add new functionality to Python without dealing with ugly Python extension module APIs.
  • To quickly try POSIX/Linux or Windows APIs or any other external SDK that exposes functionality via C linkage

You can download all example source code from https://github.com/akhin/cplusplus_dev_toys/tree/master/calling_cpp_from_python

2. Why Ctypes instead of extension modules

I prefer using ctypes instead of writing Python extension modules because I find the Python extension module API very cumbersome and hard to read. With ctypes , the only thing you need in order to expose your C++ functionality to Python is C linkage. Below you can see the reference page for developing Python extension modules :


3. C Linkage and extern “C”

You will need C linkage in order to expose functionality in your existing C++ code. Therefore you will need to place C functions inside an extern "C" block. The extern "C" block is necessary so that the compiler generates non-mangled function names. For more information :



4. Test shared object 

Below you can see our shared object's code. It exposes a struct and functions using call by value and call by reference , and uses C++ class instantiation in one of the functions. Notice that it uses extern "C" for the exposed functions :

In order to build the code , run :

g++ -shared -fPIC libtest.cpp -o libtest.so

5. Python code using the test shared object

First , we will show all the code that consumes libtest.so :
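Since libtest.so is project-specific, here is a hedged sketch of the same ctypes mechanics (loading, argtypes, restype, thin wrappers) demonstrated against the C standard library, which is available everywhere:

```python
import ctypes
import ctypes.util

# Load the C standard library; on Linux this resolves to something like libc.so.6.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Always declare argument and return types before calling a function.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int


def string_length(text):
    """Thin Python wrapper over the C function."""
    return libc.strlen(text.encode("utf-8"))


print(string_length("hello"))  # 5
print(libc.abs(-42))           # 42
```

Loading libtest.so follows the exact same pattern, just with `ctypes.cdll.LoadLibrary("./libtest.so")`.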

Explanations are as below :

a. Importing ctypes library in line 2

b. We use cdll.LoadLibrary function to load a shared object

c. We have to specify the return type and the types of all arguments of a function in order to use it. You can see this in all the function wrappers.

d. We can work with fundamental types by using ctypes' type mappings. Below you can see the most used mappings :

int                      c_int

int*                     POINTER(c_int)

int**                    POINTER(POINTER(c_int))

char*                    c_char_p

char**                   POINTER(c_char_p)

void*                    c_void_p

void return value        None

e. You can use ctypes' byref function to pass Python objects to C++ by reference.

f. You can implement callbacks using function pointers in C/C++ and CFUNCTYPE in the Python ctypes library. invokeCallback in the Python code passes a Python function to the shared object's function , and the shared object then invokes that callback.
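The CFUNCTYPE mechanics can be sketched without libtest.so by handing a Python comparator to libc's qsort, which takes a C function pointer:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# C signature of the callback: int (*compar)(const void*, const void*)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))


def py_compare(a, b):
    # A plain Python function used as a C callback.
    return a[0] - b[0]


values = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_compare))
print(list(values))  # [1, 2, 3, 4, 5]
```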

g. You can map a struct to Python by using a list of tuples. The shared object's struct Foo is mapped to Python this way. After mapping a struct , you can call functions that use that struct. For example , the Python initCStruct passes a Foo object to the shared object's function.
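A sketch of the struct mapping itself; the `Foo` layout mirrors a hypothetical C struct, not necessarily the repository's exact one:

```python
import ctypes


# Mirror of a hypothetical C struct:
#   struct Foo { int count; double ratio; };
class Foo(ctypes.Structure):
    _fields_ = [
        ("count", ctypes.c_int),
        ("ratio", ctypes.c_double),
    ]


foo = Foo(count=3, ratio=0.5)
print(foo.count, foo.ratio)  # 3 0.5
# A Foo instance (or ctypes.byref(foo)) can now be passed to a C function
# whose argtypes declare Foo or POINTER(Foo).
```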

6. Win32 example – Color in Windows Console

As another use case , you can also work directly with OS APIs. The sample below uses Windows APIs in order to use colour in the Windows console :

When you run it :


7. Linux example – Minimal X Window

Following the previous one , this example shows how to create a minimal X11 window by calling the X Window System's shared object on Linux :

When executed , it looks like :



Visual Studio for existing remote Linux C++ projects using GDB

Introduction :

This post describes , step by step , how to create a project on a remote Linux machine ( or Windows Subsystem for Linux ) , build it remotely and debug it from Visual Studio.

I previously used Netbeans to debug Linux C++ applications via remote SSH and GDB from Windows :


However I find the Visual Studio setup easier and more convenient to use.

Linux side :

The only setup you need on your Linux machine is an SSH server. You can use either password-based or public-key-based authentication from Visual Studio. See section 4 of a previous blog post for an example setup on Debian :


It is written for Debian , however it is mostly the same on other distributions. In case you want to use Windows Subsystem for Linux on your Windows 10 , or directly an Ubuntu machine ( jump to step c ) :

a) Install WSL , either Ubuntu or OpenSuse following this document : https://docs.microsoft.com/en-us/windows/wsl/install-win10

b) After installation , you will have to create an account with password authentication. Go to the default Windows command prompt and type :

lxrun /setdefaultuser

Then specify a username and password.

c) You will want to install g++, gdb and SSH :

sudo apt-get update
sudo apt install -y build-essential
sudo apt install -y gdb
sudo apt install -y openssh-server
sudo service ssh --full-restart


CentOS : The main Linux distro I use is CentOS 7. I had to update my existing OpenSSH daemon and restart it at least once to be able to connect to it from Visual Studio :

yum install openssh openssh-server openssh-clients openssl-libs
systemctl restart sshd.service

Visual Studio side :

You can use Visual Studio 2015 Community Edition with the Visual C++ for Linux extension here : https://www.gallery.expression.microsoft.com/725025cf-7067-45c2-8d01-1e0fd359ae6e

However I would suggest using Visual Studio 2017 community edition for a few reasons :

a) You don't even need an extension : remote Linux setup comes with VS2017.

b) Whenever I debugged a remote Linux project with VS2015 , the debugger would stop on certain unhandled Linux signals. Unfortunately I could not find a way of suppressing those. You won't have this issue with VS2017.

You can download VS2017 community edition from here : https://www.visualstudio.com/vs/cplusplus/

1. You have to choose "Makefile project ( Linux )" in Visual Studio 2017 in order to use an existing project :


2. You will have to add all files to your VS solution. You can add the existing files from the remote machine if the Linux server has Samba. In case you can't do that , you can also copy the project to your Windows system and add all files to your solution :


3. Go to your project's settings and choose "Remote build". You will need to specify the remote build command. That command will run in the home directory of your SSH connection. An example :


4. Go to your project's settings and choose the Debugging tab. Here you need to specify the binary output for GDB to debug. Paths you enter will be relative to the home directory of your SSH account :


Note that you can use either "gdb" , which drives gdb via SSH , or "gdbserver". Plain "gdb" is the method I use in this post. However , if you cannot install GDB on the remote Linux machine , you can run gdbserver there instead ( it is independent from GDB ) and use "gdbserver" mode to connect to it.

5. An extra step at this point is setting your include path. That is extremely useful as you can feed Visual Studio's IntelliSense and navigate native Linux headers. For simplicity , I copied my Linux /usr/include directory to my local Windows machine and specified that path as one of the include paths in the project settings :


6. Finally , the last step is connecting to your Linux server. When you press the debug button for the first time , you can connect using the connection screen :


And finally this is how it looks :


7. Linux console interaction : If you go to Debug -> Linux console , it will bring up a Linux console window that you can interact with :


What about Visual Studio Code :

Visual Studio Code is a great lightweight development environment that you can use for almost anything. I tried using it from Windows targeting a remote Linux machine running gdbserver , however I couldn't get it working. My current understanding is that it does not currently support remote GDB sessions from Windows to a Linux server , however you can still connect to remote Linux/macOS servers from Linux/macOS clients as described here :


Context switches : Epoll vs multithreaded IO benchmark

1. Introduction and multiplexed io
In the classic server implementation that needs to handle multiple clients simultaneously , you implement a thread-per-client solution. This approach requires constantly calling blocking socket APIs on many threads.

However , when using multiplexed IO , you wait for events instead of polling every time. Another advantage is that you can implement such a server using only one thread , which is nice for avoiding context switches. This post's purpose is measuring the different IO mechanisms available in user space. The implementations use TCP , however all tests are done on the same machine via the loopback adapter in order to avoid network effects and focus on the kernel.

2. Select, poll and epoll

Linux provides select, poll and epoll for multiplexed io :




We will be using epoll in this post , as its biggest advantage is that you traverse ready events rather than all file descriptors to look for events. Therefore you don't need to loop over idle file descriptors.

Another note about epoll is that it provides different modes :

Level-triggered mode : You will get an event notification as long as there is data to process. This means you will keep getting the same event if you did not consume the buffer.

Edge-triggered mode : You will get a notification only once , regardless of whether you processed the buffer or not.

3. About IO patterns : Reactor and Proactor
Select , poll and epoll allow you to implement the reactor IO pattern , in which you wait for events : https://en.wikipedia.org/wiki/Reactor_pattern

Another similar IO pattern is called proactor : https://en.wikipedia.org/wiki/Proactor_pattern

In the proactor pattern , you instead wait for the completion of reading from a descriptor such as a socket. Having searched around proactor implementations for a while , I don't think it is truly possible to implement on Linux , as no such kernel mechanism is provided : https://stackoverflow.com/questions/2794535/linux-and-i-o-completion-ports

On the other hand it is possible to implement it in Windows using IO completion ports. You can see an example implementation here : https://xania.org/200807/iocp

Note that Boost.ASIO actually uses epoll underneath in order to implement a proactor , therefore we can't say it is a true proactor on Linux.

4. Thread per client implementation

The thread-per-client implementation has an always-running thread to accept new connections and spawns a new thread per connection. It uses std::mutex only when a new connection happens , in order to synchronise the bookkeeping of connected clients.

The implementation of the base TCPServer class , which is used by both the thread-per-client and reactor implementations to manage the connected peers :

The implementation of the thread-per-client server , which is derived from TCPServer :

5. Reactor ( Epoll ) server implementation 

The reactor implementation accepts new connections and handles client events all on the same thread , therefore it does not require any synchronisation. It uses level-triggered epoll for simplicity.

First , the implementation of io_event_listener_epoll.cpp , which is an epoll wrapper :

And here is server_reactor.cpp , which uses the epoll wrapper to implement a reactor server :

6. Dropped connections and socket buffer sizes 

I observed disconnection issues with a high number of sockets and threads , for example 1024 sockets and threads for the client automation and the same for the thread-per-client server implementation , even when using the loopback adapter on the same machine. All had the same symptom : the client automation program got socket error code 104 ( Connection reset by peer ) whereas I could not spot any socket error on the server side. However , one thing I noticed is that increasing the socket receive and send buffer sizes helped. In order to set socket send and receive buffer sizes system-wide :

echo 'net.ipv4.tcp_wmem= 10240 1024000 12582912' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_rmem= 10240 1024000 12582912' >> /etc/sysctl.conf

Then type "sysctl -p" so that the system picks the changes up. I tried different default buffer sizes and observed similar results for system-wide socket receive and send default buffer sizes such as 1024000 , 87380 , 10240 and 128 bytes. I observed a high number of disconnections while benchmarking the thread-per-client server with 1024 clients/threads when the socket buffer sizes were only 128 bytes.

7. Benchmark

As I am measuring the IO performance of different kernel mechanisms from user space , I benchmarked on a single machine. That is also useful to avoid any network effects , as I am mainly interested in IO and context switches.

You can specify the number of clients and the number of messages when using the client automation which I wrote for benchmarking. A thread will be spawned for each client , and each thread will send the specified number of messages to the connected server. Each thread then expects a response per message for the automation to end.
At the end , the client automation will show you the total elapsed time and the average RTT ( round-trip time ). It will also report the number of disconnections , which gives an idea about the accuracy of the results.

You can find all the source code of the servers and the client automation here : https://github.com/akhin/low_latency_experiments/tree/master/epoll_vs_multithreaded_io

During the benchmark , the system-wide TCP buffer sizes were as below :

net.ipv4.tcp_wmem = 4096 87380 16777216

net.ipv4.tcp_rmem = 4096 87380 16777216

In all benchmarks , I used 100 ping-pongs between the client automation and the server , and changed the number of clients ( threads ) in each benchmark. For 100 ping-pongs :

Client number    Epoll RTT          Thread-per-client RTT
4                20 microseconds    62.5 microseconds
128              23 microseconds    95 microseconds
1024             30 microseconds    148 microseconds

8. Measuring context switches per thread using SystemTap

I wanted to display the context switches per thread. Therefore , I first used named threads in the server implementations using pthread_setname_np :


That allowed me to give an OS-level name to each thread ( basically each process , as threads are light-weight processes : https://en.wikipedia.org/wiki/Light-weight_process ).

After that I prepared a short SystemTap ( https://sourceware.org/systemtap/ ) script to measure context switches via the Linux kernel sched_switch event :
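A hedged sketch of such a script; the scheduler tapset variable names vary between SystemTap versions, so `next_task_name` here is an assumption, not necessarily the exact identifier the original script used:

```systemtap
# Count scheduler switches per thread name while the traced command runs.
global switches

probe scheduler.ctxswitch
{
    switches[next_task_name] <<< 1
}

probe end
{
    foreach (name in switches)
        printf("%-20s : %d context switches\n", name, @count(switches[name]))
}
```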

In order to run the script above for a specific program :

stap context_switch.stp -c program_name

When you run this SystemTap script , it will report the number of context switches per thread. You can easily notice the much higher total number of context switches in the thread-per-client implementation compared to the epoll/reactor implementation.

SystemTap probes work system-wide and therefore slow down the system , so I collected outputs for 32 clients from the thread-per-client server and the epoll server.

Thread per client server context switch counts per thread :



Epoll/Reactor server context switch count for the single epolling thread :



C++ reflection using Clang

1. Introduction : In this post , I will show how to build a mini C++ reflection tool using Clang ( libclang via Python ). First , I will talk about Clang/LLVM and their ecosystem.

2. Clang & LLVM : Many people know Clang as a C++ compiler that is an alternative to , and highly compatible with , GCC. Clang actually is more than that. Clang is a compiler front end , and the abstract syntax tree it produces is also available to be used as a library. Basically , a compiler front end is the first half of a compiler : it tokenises the source code and then translates it into a traversable syntax tree. This abstract syntax tree is then passed to the part called the compiler back end. LLVM , a back end compatible with Clang's output , is responsible for converting that tree into instructions. This is also the part where compilation optimisations happen.

In this post , the example project will focus on working with an abstract syntax tree. You can read about it on Wikipedia :


To be more specific , below you can see a sample “foo” class :

And below you can see the abstract syntax tree produced by Clang :


Translation unit : As described by the standard , it is the basic unit of compilation. Here , foo.cpp is the translation unit.

CLASS_DECL : Class declaration

CXX_ACCESS_SPEC_DECL : Access specifier : private , public or protected

FIELD_DECL : A member of the class

CXX_METHOD : A member method of the class

3. The current ecosystem : As it is very easy to use Clang as a library , there are many utility tools built around it. Libclang is the C API of Clang ; in this post , I will be using it via its Python binding. There are also tools like clang-tidy ( static analysis ) , clang-include-fixer , clang-format ( built on LibFormat ) to format C++ source or apply your project's coding convention , Cling , which is a command-line C++ interpreter , and more.

As for the back end side , LLVM , which converts the AST into machine code , there are many interesting projects built on top of it , such as :

NVCC : Nvidia's compiler , built on a modified LLVM , allows you to write plain C++ and produce GPU code.

MapD : A product using Nvidia's NVCC to run SQL queries on the GPU. They optimise their process this way according to this article : https://devblogs.nvidia.com/parallelforall/mapd-massive-throughput-database-queries-llvm-gpus/

Emscripten : An open-source toolchain which uses LLVM to compile C++ into asm.js in order to run it in existing browsers : https://github.com/kripken/emscripten

Cheerp : Converts C++ into HTML/Javascript : http://leaningtech.com/cheerp/

4. First steps with Clang : Initially , I will show the simplest libclang code that recursively traverses the foo.cpp above. For this I will be using the Python binding of libclang :
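A hedged sketch of such a traversal; the recursive walk itself only relies on a node's `kind`, `spelling` and `get_children()`, so it is written as a plain function and wired up to `clang.cindex` (which must be installed, e.g. via `pip install libclang`) only in the `__main__` block:

```python
def dump_node(node, level=0, lines=None):
    """Depth-first walk of a Clang-style AST node, indenting per level."""
    if lines is None:
        lines = []
    lines.append("%s%s : %s" % ("  " * level, node.kind, node.spelling))
    for child in node.get_children():
        # Recurse one level deeper; when the call returns (stack unwinds),
        # we are implicitly back at the parent's level.
        dump_node(child, level + 1, lines)
    return lines


if __name__ == "__main__":
    import clang.cindex
    index = clang.cindex.Index.create()
    translation_unit = index.parse("foo.cpp", args=["-std=c++11"])
    print("\n".join(dump_node(translation_unit.cursor)))
```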

As seen in the example , we initially start with a translation unit and then traverse the syntax tree recursively. When we recurse into a child , we increment the level counter , and we know we are leaving a child node when the stack unwinds , at which point we decrement the level counter. The print function takes the level variable as an argument in order to visualise the tree structure in a very simple way.

5. Reflection tool : Reflection is the ability to access your code's metadata at runtime. In other languages such as C# it is provided by the framework , whereas C++ does not provide the same functionality. In C++ , there are approaches such as using templates and C++11 SFINAE , or adding an extra prebuild step that scans files and creates reflection data , as Qt does. This example is closer to Qt's approach. The biggest advantage of this approach is that you do not need to make any changes to your existing source code. Basically , we will traverse every node in the syntax tree and record the data we see :

  • Traverse each node in the tree recursively
  • When it sees a class declaration , it creates a record for the class
  • When it sees an access specifier , it sets the current access specifier level for the following members
  • When it sees a member variable/method declaration , it creates a record associated with the current class and access specifier

The tool first creates the data and then generates C++ code. Here is the source of the simple reflection tool :

And below you can see the output for foo class :

In order to use it :

auto ret = Reflection::GetClassNames();
auto ret2 = Reflection::GetMembers("Foo");

6. What more can be done : There are many things doable with Clang , and the best examples are static analysis tools. Others are generating serialisation and reflection code , applying your team's or company's coding standards , include fixing and many more. There are many startups working with the Clang parser. Note that there is also LibTooling , a library which helps you create standalone tools.

As for myself , I am working on a dynamic execution analysis tool which also gets help from Clang's AST output in order to find all possible call flows before starting dynamic analysis. Below you can see a screenshot of an SQLite database with call flow information for the Doom source code :

7. Links : 

Clang official page : http://clang.llvm.org/

Detailed information about Clang AST :  https://www.youtube.com/watch?v=VqCkCDFLSsc

Generating serialization code : http://llvm.org/devmtg/2012-04-12/Slides/Wayne_Palmer.pdf

A more complete reflection example : http://austinbrunkhorst.com/blog/category/reflection/

An interesting reflection project which gets its data from PDB files : http://msinilo.pl/blog2/post/p707/

GDB Debugging Automation with Python : Implementing a memory leak detector

1. Introduction : In this post , I will talk about how to automate debugging with GDB , and as an example project I will show a memory leak detector. The final leak detector can be seen in action here :

2. About GDB automation : I previously posted a short write-up about WinDbg : https://nativecoding.wordpress.com/2016/01/10/automate-attach-to-process-on-windows-with-windbg/

Similarly to WinDbg , GDB supports script files , which allow you to save a batch of commands. You can also place those in the GDB init file so they load any time you start GDB. Additionally , the GDB CLI has a batch mode , therefore you can also automate GDB commands with an external Bash script. Below you can see a simple GDB script that dumps information about malloc calls :
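A hedged sketch of such a command file; it assumes x86-64, where the first integer argument (malloc's size) is in the rdi register:

```gdb
# malloc_dumper.txt - print the requested size and the caller of each malloc,
# then keep running. Load with: source malloc_dumper.txt
set pagination off

break malloc
commands
    silent
    printf "malloc called, size = %d\n", $rdi
    backtrace 1
    continue
end
```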

To load a script like the one above , just type "source malloc_dumper.txt" at the GDB prompt.

However , debugging automation becomes really powerful when you use GDB's Python API. Starting from GDB 7 , GDB comes with an embedded Python interpreter and exposes a module named "gdb" for use from Python.

3. What can be done with the GDB Python API : First , note that the "gdb" module will only be available when your script is executed from within GDB , as GDB dynamically injects it into its interpreter. Therefore you will need to work directly at the GDB prompt to explore it. Here is a list of the APIs :


To summarise what can be done generally :

  • You can do anything you could do with GDB scripts simply by calling gdb.execute("gdb_command_you_want_to_enter")
  • You can create new GDB commands or  functions
  • You can create pretty printers. ( See the final Links section for an STL pretty printer example )
  • You can access breakpoints , frames , blocks , symbols , processes , threads , exceptions , values and more , and all of these are provided as classes , which makes automation very convenient compared to plain GDB scripting

4. Example project "memory leak detector" : I coded a small Python GDB extension script which dumps information about malloc , realloc , calloc and free calls from the GNU libc runtime. Briefly , how it works :

  • It places breakpoints on the GNU libc runtime memory functions. It also places breakpoints on main and exit to detect the start and the end of the session.
  • When a memory-function breakpoint hits , it takes control , captures the arguments passed to the function and the callstack , executes until the end of the frame in order to capture the return value , and then continues debugging
  • I also created a small Bash script which makes it easy to use the "memdump" extension. It basically executes GDB in batch mode , loads the Python script into GDB's memory and executes it.

Note : As a prerequisite , you will need to install the debug version of the GNU libc runtime. On Ubuntu :

sudo apt-get install libc6-dbg

And on CentOS :

yum install yum-utils
debuginfo-install glibc

And here you can see the Python implementation :

You can use the command below in order to start a GDB session by loading memdump.py :

gdb -batch -ex "source memdump.py" -ex "memdump" -ex "r" <debuggee_executable>

5. Analysing the dump output : The GDB extension above creates a text file with information about all memory operations. I also implemented a separate Python script that parses the GDB extension's output and finds the leaks. Basically , the way it works is :

  • For each calloc and malloc , we add the memory event to a hash table , using the memory address as the key
  • For each realloc , we remove the entry for the previously allocated address from the hash table and add a new entry with the new memory address
  • For each free operation , we remove the entry from the dictionary.
  • Specifically for the GNU libc runtime , we ignore memory operations which belong directly to the GNU libc runtime's internal functions
  • Finally , each entry remaining in the dictionary gives us a leak.

Here is the analyser script :
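The core bookkeeping of the steps above can be sketched as follows; the event format and the `find_leaks` name are assumptions for illustration, not the repository's exact parser:

```python
def find_leaks(events):
    """Replay parsed memory events and return the allocations never freed.

    Each event is a dict such as:
      {"op": "malloc",  "address": "0x1", "size": 32}
      {"op": "realloc", "old_address": "0x1", "address": "0x2", "size": 64}
      {"op": "free",    "address": "0x2"}
    """
    live = {}  # hash table: returned memory address -> allocation event
    for event in events:
        op = event["op"]
        if op in ("malloc", "calloc"):
            live[event["address"]] = event
        elif op == "realloc":
            live.pop(event["old_address"], None)  # the old block is released
            live[event["address"]] = event
        elif op == "free":
            live.pop(event["address"], None)
    return list(live.values())  # whatever is still live at the end leaked


events = [
    {"op": "malloc", "address": "0x1", "size": 32},
    {"op": "realloc", "old_address": "0x1", "address": "0x2", "size": 64},
    {"op": "malloc", "address": "0x3", "size": 16},
    {"op": "free", "address": "0x2"},
]
print(find_leaks(events))  # only the 0x3 allocation was never freed
```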

6. Links : Here you can find a list of nice resources for the topic :

A presentation about GDB Python extensions : https://dmalcolm.fedorapeople.org/presentations/PyCon-US-2011/GdbPythonPresentation/GdbPython.html#1

A pretty printing example : http://hgad.net/posts/object-inspection-in-gdb/

A Python extension to make deadlock analysis easier : http://www.linuxjournal.com/article/11027?page=0,0

Another deadlock detector :



C++ exceptions with stack traces

1. Introduction : In this post , I will share a simple single header file , "pretty_exception.h". Basically , it is for throwing exceptions with much more information : the message carries the file , the line number and the function/method that is throwing the exception. Furthermore , it also adds callstack information , can produce coloured console output and traces for syslog/DbgView , and even comes with a simple message box ( Windows only ). Below you can see the output from the Linux console :

And Windows console output :

If you enable tracing , you can see the exception trace in syslog on Linux :

[root@localhost ~]# tail -f /var/log/messages
Jul 24 22:36:58 localhost dbus[633]: [system] Successfully activated service ‘org.freedesktop.hostname1’
Jul 24 22:36:58 localhost systemd: Started Hostname Service.
Jul 24 22:40:01 localhost systemd: Starting Session 3 of user root.
Jul 24 22:40:01 localhost systemd: Started Session 3 of user root.
Jul 24 22:40:32 localhost chronyd[636]: Selected source
Jul 24 22:45:49 localhost systemd: Starting Cleanup of Temporary Directories…
Jul 24 22:45:50 localhost systemd: Started Cleanup of Temporary Directories.
Jul 24 22:48:38 localhost systemd: Starting Session 4 of user root.
Jul 24 22:48:38 localhost systemd: Started Session 4 of user root.
Jul 24 22:48:38 localhost systemd-logind: New session 4 of user root.
Jul 24 22:48:44 localhost slog[4589]: Exception type : std::runtime_error

Message : AAA

File : main.cpp Line : 5

Callstack :

5 : ./pretty() [0x4021a4]
4 : ./pretty() [0x40197a]
3 : ./pretty() [0x401bad]
2 : /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6ca71bdaf5]
1 : ./pretty() [0x4016b9]

And you can use Microsoft's DbgView utility to see the exception trace on Windows :


Additionally on Windows, you can also have message boxes if you enable it in the header file :

2. Implementation notes :

  • File name , line number , function name : The code uses the __LINE__ and __FILE__ macros. As for the function name , I initially intended to use C99 __func__ , however I am currently not using it as the callstack information already provides it.




  • Macro expansion : In order to have __FILE__ and __LINE__ expanded at the caller , I had to define the throw functionality as macros , since __FILE__ and __LINE__ should be copied to the caller's location by the preprocessor. I also needed to concatenate these predefined macros with my own macros , therefore I used the technique described perfectly on this page : http://stackoverflow.com/questions/19343205/c-concatenating-file-and-line-macros



  • Supporting string literals & std::string : In order to support both string literals and std::string as the input message , we define a template convertToStdString function and a const char* overload of it :

inline std::string convertToStdString(const char* str) { return std::string(str); }

template <typename T>
T convertToStdString(T const& t) { return t; }


3. Source code and usage : The target platforms are Linux with GCC ( tested on CentOS 7 with GCC 4.8 ) and Windows with MSVC ( tested with MSVC 2013 on Windows 8 ). The code initially checks predefined macros to see whether it is a supported system ; for other platforms & compilers , the changes should be straightforward. Currently you can throw four different std::exception types : std::runtime_error , std::invalid_argument , std::length_error and std::logic_error. In order to use it , just include the header file and call one of the throw macros :

#include "pretty_exception.h"

void foo()

And finally here is the source code :