data:solar_satire

  * 14C data for the last 9000 years (6754.5 BC to December 2015)
  * 10Be data for the years 885 CE to December 2015

  * In both cases, the data is daily from January 1st 1850 onwards, and yearly before that

The **14C-based data set** scaled to the CMIP6 historical forcing is the **recommended forcing for the PMIP4-CMIP6 //tier-1// past1000 experiment**.

===== Data format =====

<WRAP center round alert 60%>
Be careful when working with the time axis, because the //float year values// do not really follow [[http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#time-coordinate|time axis and calendar conventions]]!
</WRAP>

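The //float year values// can nevertheless be mapped to approximate calendar dates by hand. The following is a minimal sketch (not part of the distributed scripts), assuming that the daily steps split each year into equal fractions (''1/365'' or ''1/366''); the ''float_year_to_date'' helper is hypothetical.

<code python>import calendar
import datetime

def float_year_to_date(fy):
    """Convert a float year value (e.g. 1850.5) to an approximate date,
    assuming the fractional part is day_index / nb_days_in_year"""
    year = int(fy)
    nb_days = 366 if calendar.isleap(year) else 365
    day_index = int(round((fy - year) * nb_days))  # 0-based day of year
    return datetime.date(year, 1, 1) + datetime.timedelta(days=day_index)

print(float_year_to_date(1850.0))     # 1850-01-01
print(float_year_to_date(1850.9973))  # 1850-12-31
</code>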
  
The original data files are provided in simple text format, and we also provide the data in netCDF format. The text files' structure is as follows:
  * 1st array: **wavelength array** in [nm], listing the center of each wavelength bin\\ ''[ 115.5,  116.5,  117.5,  118.5,  119.5,  [...] 100000.0078125 ,  120000.0234375 ,  140000.015625  ,  160000.015625  ]''
  * 2nd array: **wavelength bin** in [nm], listing the bin width of each wavelength bin\\ ''[ 1., 1., 1., 1. [...]   40., 5010., 14990.00488281, 20000.00585938, 20000., 20000., 20000.00585938, 20000.00585938, 20000., 20000. ]''
  * 3rd array: **time** in [year] (floating numbers)
    * 69235 time steps for 14C: ''[-6754.5, -6753.5, -6752.5, -6751.5, [...]  2015.98632812,  2015.98901367,  2015.99182129, 2015.99450684,  2015.99731445]''
    * 61595 time steps for 10Be: ''[885.5, 886.5, 887.5, 888.5, [...] 2015.986, 2015.989, 2015.992, 2015.995, 2015.997]''
  * 4th array: **SSI reconstruction** in [W m-2 nm-1]. The value given is the average SSI in the corresponding wavelength bin.
  
</code>
  
  * The following python code shows how to deal with the original compressed text data. You can also check the {{:data:solar_txt2nc.py|full python script}} that was used to generate the netCDF files
    * <code python># Get directly the data from the bz2 compressed file
file_in = bz2.BZ2File(input_full_path)
  
# [...]

# Integrate the SSI over the wavelength bins to get the TSI
tsi = np.dot(ssi, wl_bin)
</code>
  * The following python example shows how to determine the indices of specific years, before and after Jan 1st 1850. Once you know the indices of Jan 1st and Dec 31st, and whether a year is a leap year or not, you can easily select the data of a specific year
    * <code python># 1 value per day, AFTER (and including) Jan 1st 1850
>>> Jan_01_1850_idx = np.argwhere(year == 1850)[0, 0]
>>> Jan_01_1851_idx = np.argwhere(year == 1851)[0, 0]
>>> Jan_01_1850_idx, Jan_01_1851_idx
(8605, 8970)
>>> Jan_01_1851_idx - Jan_01_1850_idx
365
>>> Dec_31_1850_idx = Jan_01_1851_idx - 1
>>> year[Jan_01_1850_idx], year[Dec_31_1850_idx], year[Jan_01_1851_idx]
(1850.0, 1850.9973, 1851.0)

# 1 value per year, strictly BEFORE Jan 1st 1850
>>> year_50_idx = np.argwhere(year == 50.5)[0, 0] # DO NOT FORGET to use 'NNN.5' for the year value
>>> year_50_idx, year[year_50_idx]
(6805, 50.5)</code>
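  * Once the Jan 1st indices are known, the data of a single year can be sliced out and averaged. The following is a minimal sketch on toy stand-in arrays (hypothetical values, much smaller than the real 69235-step axis), assuming ''ssi'' has time as its first axis
    * <code python>import numpy as np

# Toy stand-in arrays: 5 time steps and 2 wavelength bins
year = np.array([1849.5, 1850.0, 1850.0027, 1850.0055, 1851.0])
ssi = np.arange(10, dtype=float).reshape(5, 2)

Jan_01_1850_idx = np.argwhere(year == 1850)[0, 0]
Jan_01_1851_idx = np.argwhere(year == 1851)[0, 0]

# Slice all the daily values of 1850 and average them over the time axis
ssi_1850 = ssi[Jan_01_1850_idx:Jan_01_1851_idx, :]
ssi_1850_mean = ssi_1850.mean(axis=0)
print(ssi_1850_mean)  # [4. 5.]
</code>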
  * You can use the following if you want to compute a weighted average over the time axis
    * <code python>>>> time_weights = np.ones((69235,))
>>> np.argwhere(year == 1850)
array([[8605]])
>>> year[8605]
1850.0
>>> year[8604]
1849.5
# Note: it would be nicer to assign 366 to leap years below...
>>> time_weights[:8605] = 365 # Assign 365 to all time steps up to (but excluding) step 8605
>>> time_weights[8600:8620]
array([ 365.,  365.,  365.,  365.,  365.,    1.,    1.,    1.,    1.,
          1.,    1.,    1.,    1.,    1.,    1.,    1.,    1.,    1.,
          1.,    1.])
>>> ssi_average_weighted = np.average(ssi, axis=0, weights=time_weights)
</code>

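  * As the note in the code above suggests, a leap-year-aware version of the weights can be built with the standard ''calendar'' module. The following is a minimal sketch on a toy stand-in time axis (the real axis has 69235 steps), assuming the yearly steps are the ''NNN.5'' values and the daily steps start at ''1850.0''
    * <code python>import calendar
import numpy as np

# Toy stand-in time axis: 3 yearly steps, then 3 daily steps
year = np.array([1847.5, 1848.5, 1849.5, 1850.0, 1850.0027, 1850.0055])
first_daily_idx = np.argwhere(year == 1850)[0, 0]

# Weight each yearly step by its real number of days, and each daily step by 1
time_weights = np.ones_like(year)
for i in range(first_daily_idx):
    time_weights[i] = 366 if calendar.isleap(int(year[i])) else 365

print(time_weights)  # 1848 is a leap year: [365. 366. 365.  1.  1.  1.]
</code>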
===== References =====

You will find below a table with all the available data files and their md5sum checksum. If you want to check that your download was OK, you can just type ''md5sum file.nc'' and compare the result to what is displayed in the table.
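If the ''md5sum'' command is not available (e.g. on Windows), the same check can be done in python. The following is a minimal sketch, where ''file.nc'' stands for any of the downloaded files:

<code python>import hashlib

def md5_of_file(path, chunk_size=1024 * 1024):
    """Return the hex md5 digest of a file, reading it by chunks
    so that big netCDF files do not have to fit in memory"""
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

# Compare the result with the checksum listed in the table below
# print(md5_of_file('file.nc'))
</code>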
  
If you want to download a file, click on the [[https://sharebox.lsce.ipsl.fr/index.php/s/LpiCUCkSmx0P6bb|PMIP4 SATIRE-M solar forcing data download link]] and then on the file you need.
  
^ md5sum output ^ Data file ^ Size ^
  • Last modified: 2016/08/08 14:15
  • by jypeter