I'm starting from the pandas DataFrame documentation here: Introduction to data structures
I'd like to iteratively fill the DataFrame with values in a time series kind of calculation. I'd like to initialize the DataFrame with columns A, B, and timestamp rows, all 0 or all NaN.
I'd then add initial values and go over this data, calculating each new row from the row before, say `row[A][t] = row[A][t-1] + 1` or so.
I'm currently using the code as below, but I feel it's kind of ugly and there must be a way to do this with a DataFrame directly or just a better way in general.
```python
import pandas as pd
import datetime as dt
import numpy as np  # scipy's zeros alias was removed; use numpy directly

base = dt.datetime.today().date()
dates = [base - dt.timedelta(days=x) for x in range(9, -1, -1)]

valdict = {}
symbols = ['A', 'B', 'C']
for symb in symbols:
    valdict[symb] = pd.Series(np.zeros(len(dates)), dates)

for thedate in dates:
    if thedate > dates[0]:
        for symb in valdict:
            valdict[symb][thedate] = 1 + valdict[symb][thedate - dt.timedelta(days=1)]
```
Best Answer
NEVER grow a DataFrame row-wise!
Most answers here will tell you how to create an empty DataFrame and fill it out, but no one will tell you that it is a bad thing to do.
Here is my advice: Accumulate data in a list, not a DataFrame.
Use a list to collect your data, then initialise a DataFrame when you are ready. Either a list-of-lists or list-of-dicts format will work; `pd.DataFrame` accepts both. `pd.DataFrame` converts a list of rows (where each row holds scalar values) into a DataFrame. If your function yields `DataFrame`s instead, call `pd.concat`.

Pros of this approach:
- It is always cheaper to append to a list and create a DataFrame in one go than it is to create an empty DataFrame (or one of NaNs) and append to it over and over again.
- Lists also take up less memory and are a much lighter data structure to work with, append to, and remove from (if needed).
- `dtypes` are automatically inferred (rather than assigning `object` to all of them).
- A `RangeIndex` is automatically created for your data, instead of you having to take care to assign the correct index to the row you are appending at each iteration.

If you aren't convinced yet, this is also mentioned in the documentation:

> Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once.
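As a minimal sketch of the recommended pattern (the column names and the row-generating loop here are invented for illustration):

```python
import pandas as pd

# Accumulate plain Python rows in a list...
data = []
for i in range(5):
    data.append({'A': i, 'B': 2 * i, 'C': 3 * i})  # one dict per row

# ...then construct the DataFrame once, at the end.
# dtypes are inferred (int64 here) and a RangeIndex is created automatically.
df = pd.DataFrame(data)
```

A list of lists works the same way, with the column names passed as `columns=` to the constructor.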
pandas >= 2.0 update: `append` has been removed!

`DataFrame.append` was deprecated in version 1.4 and removed from the pandas API entirely in version 2.0. See also the GitHub issue that originally proposed its deprecation.

These options are horrible
`append` or `concat` inside a loop

Here is the biggest mistake I've seen from beginners:

Memory is re-allocated for every `append` or `concat` operation you have. Couple this with a loop and you have a quadratic-complexity operation.

The other mistake associated with `df.append` is that users tend to forget that append is not an in-place function, so the result must be assigned back. You also have to worry about the dtypes: dealing with object columns is never a good thing, because pandas cannot vectorize operations on those columns. You will need to call the `infer_objects()` method to fix it.
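For illustration, the anti-pattern looks something like this sketch (using `pd.concat`, since `append` itself is gone in pandas >= 2.0; the column names and values are invented):

```python
import pandas as pd

# Anti-pattern: grow a DataFrame one row at a time.  Every concat copies
# all previously accumulated rows, so n iterations cost O(n^2) in total.
df = pd.DataFrame(columns=['A', 'B'])      # an empty frame starts as object dtype
for i in range(5):
    row = pd.DataFrame([{'A': i, 'B': 2 * i}])
    # concat is not in-place: forgetting to assign the result back
    # (df = ...) silently leaves df empty.
    df = pd.concat([df, row], ignore_index=True)

# If object columns have crept in, infer_objects() re-infers proper dtypes.
df = df.infer_objects()
```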
`loc` inside a loop

I have also seen `loc` used to append to a DataFrame that was created empty:

As before, you have not pre-allocated the amount of memory you need each time, so the memory is re-grown each time you create a new row. It's just as bad as `append`, and even more ugly.
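A sketch of that `loc`-growing pattern (invented values), next to the list-based alternative:

```python
import pandas as pd

# Anti-pattern: enlarge an empty DataFrame label by label with .loc.
# Each new label can force a reallocation, just like append/concat did.
df_slow = pd.DataFrame(columns=['A', 'B'])
for i in range(5):
    df_slow.loc[i] = [i, 2 * i]   # setting a missing label enlarges the frame

# Preferred: collect the rows in a list and construct once.
rows = [[i, 2 * i] for i in range(5)]
df_fast = pd.DataFrame(rows, columns=['A', 'B'])
```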
Empty DataFrame of NaNs

And then, there's creating a DataFrame of NaNs, and all the caveats associated therewith. It creates a DataFrame of `object` columns, like the others, and appending still has all the issues of the methods above.
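For instance, pre-allocating a frame of NaNs (a sketch; the shape and names are invented):

```python
import pandas as pd

# A pre-allocated frame of NaNs looks tidy, but every column comes out as
# object dtype, because the NaN placeholder rows carry no type information.
df = pd.DataFrame(columns=['A', 'B'], index=range(3))
print(df.dtypes)   # object for both columns
```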
The Proof is in the Pudding
Timing these methods is the fastest way to see just how much they differ in terms of their memory and utility.
Benchmarking code for reference.
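The benchmark itself isn't reproduced here; a rough `timeit`-based sketch of the comparison (sizes and repeat counts are arbitrary) would be:

```python
import timeit

setup = "import pandas as pd"

# Build 200 rows by accumulating in a list, then constructing once.
list_version = """
data = [{'A': i, 'B': 2 * i} for i in range(200)]
df = pd.DataFrame(data)
"""

# Build the same 200 rows by concatenating one row per iteration.
concat_version = """
df = pd.DataFrame(columns=['A', 'B'])
for i in range(200):
    row = pd.DataFrame([{'A': i, 'B': 2 * i}])
    df = pd.concat([df, row], ignore_index=True)
"""

t_list = timeit.timeit(list_version, setup=setup, number=10)
t_concat = timeit.timeit(concat_version, setup=setup, number=10)
print(f"list accumulation: {t_list:.4f}s  concat-in-loop: {t_concat:.4f}s")
```

The gap widens as the row count grows, since the concat loop is quadratic while the list version is linear.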