It's very unfortunate that you see dynamic allocation so often. That just shows how many bad C++ programmers there are.
In a sense, you have two questions bundled up into one. The first is when should we use dynamic allocation (using new)? The second is when should we use pointers?
The important take-home message is that you should always use the appropriate tool for the job. In almost all situations, there is something more appropriate and safer than performing manual dynamic allocation and/or using raw pointers.
Dynamic allocation
In your question, you've demonstrated two ways of creating an object. The main difference is the storage duration of the object. When doing Object myObject; within a block, the object is created with automatic storage duration, which means it will be destroyed automatically when it goes out of scope. When you do new Object(), the object has dynamic storage duration, which means it stays alive until you explicitly delete it. You should only use dynamic storage duration when you need it. That is, you should always prefer creating objects with automatic storage duration when you can.
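To illustrate (Object here is just a placeholder type, and std::make_unique is the smart-pointer approach discussed below):
#include <memory>
struct Object {};
void example()
{
    Object myObject;            // automatic storage duration: destroyed
                                // automatically at the end of the scope
    Object* raw = new Object(); // dynamic storage duration: stays alive...
    delete raw;                 // ...until you explicitly delete it
    auto smart = std::make_unique<Object>(); // dynamic storage duration, but
                                             // deleted automatically (see below)
}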
The two main situations in which you might require dynamic allocation:
- You need the object to outlive the current scope - that specific object at that specific memory location, not a copy of it. If you're okay with copying/moving the object (most of the time you should be), you should prefer an automatic object.
- You need to allocate a lot of memory, which may easily fill up the stack. It would be nice if we didn't have to concern ourselves with this (most of the time you shouldn't have to), as it's really outside the purview of C++, but unfortunately, we have to deal with the reality of the systems we're developing for.
When you do absolutely require dynamic allocation, you should encapsulate it in a smart pointer or some other type that performs RAII (like the standard containers). Smart pointers provide ownership semantics of dynamically allocated objects. Take a look at std::unique_ptr and std::shared_ptr, for example. If you use them appropriately, you can almost entirely avoid performing your own memory management (see the Rule of Zero).
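A rough sketch (Widget is a placeholder type):
#include <memory>
struct Widget {};
void ownershipExample()
{
    // Sole ownership: the Widget is destroyed when owner goes out of scope.
    auto owner = std::make_unique<Widget>();
    // Shared ownership: the Widget is destroyed when the last shared_ptr
    // referring to it is destroyed.
    auto first = std::make_shared<Widget>();
    auto second = first; // both now refer to the same Widget
}   // no delete anywhere: the smart pointers clean up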
Pointers
There are other, more general uses for raw pointers beyond dynamic allocation, but most have alternatives that you should prefer. As before, always prefer the alternatives unless you really need pointers.
You need reference semantics. Sometimes you want to pass an object using a pointer (regardless of how it was allocated) because you want the function to which you're passing it to have access to that specific object (not a copy of it). However, in most situations, you should prefer reference types to pointers, because this is specifically what they're designed for. Note this is not necessarily about extending the lifetime of the object beyond the current scope, as in situation 1 above. As before, if you're okay with passing a copy of the object, you don't need reference semantics.
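For example (Counter is a hypothetical type):
struct Counter { int value = 0; };
void increment(Counter& c) { ++c.value; }    // reference: modifies the caller's
                                             // object; no copy, cannot be null
void incrementCopy(Counter c) { ++c.value; } // copy: the caller's object is untouched
int main()
{
    Counter counter;
    increment(counter);     // counter.value == 1
    incrementCopy(counter); // counter.value is still 1
}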
You need polymorphism. You can only call functions polymorphically (that is, according to the dynamic type of an object) through a pointer or reference to the object. If that's the behavior you need, then you need to use pointers or references. Again, references should be preferred.
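A minimal sketch of why a reference (or pointer) is needed for dynamic dispatch:
#include <iostream>
struct Shape {
    virtual ~Shape() = default;
    virtual void draw() const { std::cout << "Shape\n"; }
};
struct Circle : Shape {
    void draw() const override { std::cout << "Circle\n"; }
};
void render(const Shape& shape) // a reference preserves the dynamic type
{
    shape.draw();               // dispatches to Circle::draw for a Circle
}
int main()
{
    Circle circle;
    render(circle);        // prints "Circle"
    Shape sliced = circle; // passing by value slices: the Circle part is lost
    sliced.draw();         // prints "Shape"
}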
You want to represent that an object is optional by allowing a nullptr to be passed when the object is being omitted. If it's an argument, you should prefer to use default arguments or function overloads. Otherwise, you should prefer a type that encapsulates this behavior, such as std::optional (introduced in C++17; with earlier C++ standards, use boost::optional).
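A small sketch using std::optional instead of a nullable pointer:
#include <optional>
#include <string>
std::string greet(std::optional<std::string> name = std::nullopt)
{
    return "Hello, " + name.value_or("stranger");
}
// greet() returns "Hello, stranger"; greet("Ada") returns "Hello, Ada".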
You want to decouple compilation units to improve compilation time. The useful property of a pointer is that you only require a forward declaration of the pointed-to type (to actually use the object, you'll need a definition). This allows you to decouple parts of your compilation process, which may significantly improve compilation time. See the Pimpl idiom.
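A bare-bones sketch of the idiom (header and source shown together):
// widget.h - clients see only a forward declaration of the implementation.
#include <memory>
class Widget {
public:
    Widget();
    ~Widget(); // defined in widget.cpp, where Impl is a complete type
    void doWork();
private:
    struct Impl;                 // forward declaration suffices for a pointer
    std::unique_ptr<Impl> pimpl;
};
// widget.cpp - changes to Impl no longer force clients of widget.h to recompile.
struct Widget::Impl { int state = 0; };
Widget::Widget() : pimpl(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::doWork() { ++pimpl->state; }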
You need to interface with a C library or a C-style library. At this point, you're forced to use raw pointers. The best thing you can do is make sure you only let your raw pointers loose at the last possible moment. You can get a raw pointer from a smart pointer, for example, by using its get member function. If a library performs some allocation for you which it expects you to deallocate via a handle, you can often wrap the handle up in a smart pointer with a custom deleter that will deallocate the object appropriately.
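For example, the C standard library's FILE* handle can be wrapped so fclose is never forgotten:
#include <cstdio>
#include <memory>
using FilePtr = std::unique_ptr<std::FILE, int (*)(std::FILE*)>;
void readFirstChar(const char* path)
{
    FilePtr file(std::fopen(path, "r"), &std::fclose); // custom deleter
    if (!file)
        return;
    std::fgetc(file.get()); // let the raw pointer loose at the last moment
}                           // fclose runs automatically here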
You only need to worry about ABIs when you are crossing library boundaries. Within your own applications/libraries the ABI doesn't really matter as presumably all your object files are compiled with the same compiler version and switches.
If you have a library compiled with ABI1 and an application compiled with ABI2, then the application will crash when it tries to call functions from the library, as it won't pass the arguments correctly. To fix the crash, you would need to recompile the library (and any other libraries it depends on) with ABI2.
In your specific case, as long as you compile nlohmann with the same compiler version as your application (or are just using nlohmann as a header-only library), you don't need to worry about the ABI change.
Globally suppressing the warning seems to be a dangerous option, as it will prevent you from seeing any future ABI issues. A better option would be to use #pragma to disable the warning just for the functions in question, e.g.:
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wpsabi"
void foo()
{
    // code that triggers the -Wpsabi warning
}
#pragma GCC diagnostic pop
Best Answer
Having worked for about 4-5 years on software/firmware development and environment testing of miniaturized satellites*, I would like to share my experience here.
*(miniaturized satellites are a lot more prone to single event upsets than bigger satellites, due to the relatively small, limited size of their electronic components)
Now, this situation is normally handled at both the hardware and the software level. Here, as you request, I will share what we can do at the software level.
...recovery purpose... Provide the ability to update/recompile/reflash your software/firmware in the real environment. This is an almost must-have feature for any software/firmware in a highly ionized environment. Without it, you could have as much redundant software/hardware as you want, but at some point it is all going to blow up. So, prepare this feature!
...minimum working version... Have multiple, responsive copies of a minimum version of the software/firmware in your code. This is like Safe mode in Windows. Instead of having only one, fully functional version of your software, have multiple copies of a minimum version of it. A minimum copy will usually be much smaller than the full copy and almost always has only the two or three most essential features - enough to listen to the external system and to update the full software/firmware.
...copy... somewhere... Have redundant software/firmware somewhere.
You could, with or without redundant hardware, try to have redundant software/firmware in your ARM uC. This is normally done by having two or more identical copies of the software/firmware at separate addresses which send heartbeats to each other - but only one is active at a time. If one or more copies is known to be unresponsive, switch to another. The benefit of this approach is that we have a functional replacement immediately after an error occurs - without any contact with whatever external system/party is responsible for detecting and repairing the error (in the satellite case, this is usually the Mission Control Centre (MCC)).
Strictly speaking, without redundant hardware you cannot eliminate all single points of failure. At the very least, you will still have one: the switch itself (or often the beginning of the code). Nevertheless, for a device limited by size in a highly ionized environment (such as pico/femto satellites), reducing the single points of failure to one point without additional hardware is still worth considering. What's more, the code for the switching will certainly be much smaller than the code for the whole program - significantly reducing the risk of a single event upset occurring in it.
But if you are not doing this, you should have at least one copy in your external system which can contact the device and update the software/firmware (in the satellite case, this is again the mission control centre).
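To make the idea concrete, here is a highly simplified, hypothetical sketch of the switching logic in plain C++. Real firmware would branch to fixed flash addresses and read heartbeats from hardware registers or shared RAM rather than using function pointers and std::chrono:
#include <chrono>
using Clock = std::chrono::steady_clock;
// Hypothetical model: each redundant image is an entry point plus the time
// of its last heartbeat.
struct Image {
    void (*entryPoint)();
    Clock::time_point lastHeartbeat;
};
bool isResponsive(const Image& image, Clock::duration timeout)
{
    return Clock::now() - image.lastHeartbeat < timeout;
}
// The supervisor/switch itself remains the single point of failure noted
// above, so keep it as small as possible.
void supervise(Image& primary, Image& backup)
{
    if (isResponsive(primary, std::chrono::seconds(5)))
        primary.entryPoint();
    else
        backup.entryPoint(); // switch to the redundant copy
}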
...detectable erroneous situation... The error must be detectable, usually by a hardware error correction/detection circuit or by a small piece of error correction/detection code. It is best to keep such code small, in multiple copies, and independent of the main software/firmware; its only task is checking/correcting. If the hardware circuit/firmware is reliable (for example, it is more radiation-hardened than the rest, or it has multiple circuits/logics), then you might consider doing error correction with it. If it is not, it is better to limit it to error detection, with the correction done by an external system/device. For error correction, you could consider a basic algorithm like Hamming or Golay23, because both can be implemented fairly easily in a circuit and in software. But it ultimately depends on your team's capability. For error detection, CRC is normally used.
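As a rough sketch of the detection side, a bitwise CRC-16-CCITT over a memory block (correction with Hamming/Golay would need a separate encoder/decoder):
#include <cstddef>
#include <cstdint>
// Minimal bitwise CRC-16-CCITT for error *detection* over a block of memory.
std::uint16_t crc16(const std::uint8_t* data, std::size_t length)
{
    std::uint16_t crc = 0xFFFF;
    for (std::size_t i = 0; i < length; ++i) {
        crc ^= static_cast<std::uint16_t>(data[i]) << 8;
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x8000) ? static_cast<std::uint16_t>((crc << 1) ^ 0x1021)
                                 : static_cast<std::uint16_t>(crc << 1);
    }
    return crc;
}
// Store the CRC alongside each block; periodically recompute and compare.
// A mismatch is the "detectable erroneous situation" described above.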
...hardware supporting the recovery... Now comes the most difficult aspect of this issue. Ultimately, recovery requires the hardware responsible for the recovery to be at least functional. If the hardware is permanently broken (which normally happens after its total ionizing dose reaches a certain level), then there is (sadly) no way for the software to help with recovery. Hardware is thus rightly the utmost concern for a device exposed to a high radiation level (such as a satellite).
In addition to the suggestions above for anticipating firmware errors due to single event upsets, I would also like to suggest that you have:
An error detection and/or error correction algorithm in the inter-subsystem communication protocol. This is another almost must-have in order to avoid incomplete/wrong signals received from other systems.
A filter on your ADC readings. Do not use an ADC reading directly. Filter it with a median filter, a mean filter, or any other filter - never trust a single reading value. Sample more, not less - reasonably. A sketch of a median filter follows below.
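A simple median-of-five filter (readAdc is a hypothetical placeholder for your platform's ADC read):
#include <algorithm>
#include <array>
#include <cstdint>
std::uint16_t readAdc(); // hypothetical platform-specific ADC read
std::uint16_t readAdcFiltered()
{
    std::array<std::uint16_t, 5> samples;
    for (auto& sample : samples)
        sample = readAdc();
    std::sort(samples.begin(), samples.end());
    return samples[2]; // median: a single corrupted sample cannot dominate
}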