Idl Read Hdf5 | 4/10- HDF5 with Python: How to Read HDF5 Files (287 new answers updated)


Are you looking for the topic "idl read hdf5 – 4/10- HDF5 with Python: How to Read HDF5 Files"? The website https://ro.taphoamini.com answers this question in the category https://ro.taphoamini.com/wiki/; you can find the answer just below. The video by Noureddin Sadawi has 55,743 views and 297 likes.

Watch a video on the topic idl read hdf5

Watch the video on this topic here, and feel free to give feedback on what you read!

See details on 4/10- HDF5 with Python: How to Read HDF5 Files – idl read hdf5 here

Visit my personal web-page for the Python code:
https://www.softlight.tech/

For more details on the topic idl read hdf5, see the sources below.

Read a variable from a HDF5 file in IDL – gists · GitHub

Read a variable from a HDF5 file in IDL. GitHub Gist: instantly share code, notes, and snippets.


Source: gist.github.com

Date Published: 12/8/2021

View: 4371

How to Read and Visualize HDF-EOS5 MLS data Using IDL

This page presents a few examples on how to read and visualize HDF-EOS5 MLS data via IDL. IDL provides a set of functions for handling HDF5 files.


Source: hdfeos.org

Date Published: 8/3/2022

View: 4162

read_hdf5.pro – FIDASIM

shallow: Performs a shallow read i.e. no dataset/group attributes … test_1a_geometry.h5") IDL> b = read_hdf5("./test_1a_geometry.h5" …


Source: d3denergetic.github.io

Date Published: 8/7/2021

View: 2170

HDF5 Overview – IRyA, UNAM

HDF5 functions that only return an error code are typically implemented as IDL procedures. An example is H5F_CLOSE, which takes a single file identifier …


Source: www.irya.unam.mx

Date Published: 11/1/2021

View: 9494

h5_browser

… user interface (GUI) to examine HDF5 files and import data into IDL. … The default is to only read datasets or attributes with 10 elements or less.


Source: ips.ucsd.edu

Date Published: 2/13/2021

View: 9041

H5_BROWSER

In this case, the IDL command line is blocked, and no further input is taken until … The following example starts up the HDF5 browser on a sample file:.


Source: lost-contact.mit.edu

Date Published: 7/2/2021

View: 424

Reading an HDF file with C, FORTRAN, Python, IDL, MATLAB …

Reading an HDF file with C, FORTRAN, Python, IDL, MATLAB and R … Open an hdf5 file call h5fopen_f (file_name, H5F_ACC_RDWR_F, file_, status) !


Source: www.icare.univ-lille.fr

Date Published: 10/9/2021

View: 340

HDF5 Tools in IDL – SlideShare

IDL: The Interactive Data Language is the ideal software for data analysis and visualization. The IDL HDF5 Module is a set of built-in IDL routines that provide …


Source: www.slideshare.net

Date Published: 8/29/2021

View: 1637

Images related to the topic idl read hdf5

See more images related to 4/10- HDF5 with Python: How to Read HDF5 Files in the comments, or find more related articles if needed.

4/10- HDF5 with Python: How to Read HDF5 Files

Article ratings for the topic idl read hdf5

  • Author: Noureddin Sadawi
  • Views: 55,743
  • Likes: 297
  • Date Published: 2016. 12. 29.
  • Video Url link: https://www.youtube.com/watch?v=xuWB_byi-6Q

Read a variable from a HDF5 file in IDL


How to Read and Visualize HDF-EOS5 MLS data Using IDL

IDL (Interactive Data Language) is a programming language for scientific analysis and visualization. IDL provides APIs for reading and writing several file formats including HDF5 . This page presents a few examples on how to read and visualize HDF-EOS5 MLS data via IDL.

IDL provides a set of functions for handling HDF5 files. Since each of them is equivalent to an HDF5 C API, those who are familiar with the HDF5 library can easily learn how to handle HDF5 files in IDL. For example, H5F_OPEN , H5D_OPEN and H5D_READ are equivalent to H5Fopen , H5Dopen and H5Dread , respectively.

MLS swath data

HDF-EOS5 MLS L2 data is special in that it contains n vertical profiles. The location of each profile is stored in 1-D latitude/longitude arrays of size n.

To access a dataset in an HDF5 file, first open the enclosing HDF-EOS5 MLS file using H5F_OPEN . Then, using the descriptor returned by that API, open the dataset.

Since MLS holds the data values inside the L2gpValue dataset, you need to open and read it along with the Latitude, Longitude, and Pressure datasets, as shown in Figure 1.

Figure 1: Accessing MLS datasets

file_name = 'MLS-Aura_L2GP-CO_v02-23-c02_2008d277.he5'
file_id = H5F_OPEN(file_name)
dataset_id_lat = H5D_OPEN(file_id, '/HDFEOS/SWATHS/CO/Geolocation Fields/Latitude')
lat = H5D_READ(dataset_id_lat)
H5D_CLOSE, dataset_id_lat
dataset_id_lon = H5D_OPEN(file_id, '/HDFEOS/SWATHS/CO/Geolocation Fields/Longitude')
lon = H5D_READ(dataset_id_lon)
H5D_CLOSE, dataset_id_lon
dataset_id_lev = H5D_OPEN(file_id, '/HDFEOS/SWATHS/CO/Geolocation Fields/Pressure')
lev = H5D_READ(dataset_id_lev)
H5D_CLOSE, dataset_id_lev
dataset_id_val = H5D_OPEN(file_id, '/HDFEOS/SWATHS/CO/Data Fields/L2gpValue')
val = H5D_READ(dataset_id_val)
H5D_CLOSE, dataset_id_val

Once all datasets are read, pick one location. As shown in Figure 2, this example picks the location at array index 0. la0 and lo0 hold the latitude and longitude of that location. The * in val[*,0] retrieves the data values at all levels (stored in lev) for location 0. Finally, the data values and levels are rescaled to generate an easy-to-read graph.

Figure 2: Reading datasets in MLS

la0 = lat[0]
lo0 = lon[0]
x = val[*,0]
z = alog10(lev)
a = 10000000 * x
plot, a, z, xtitle='CO (1.0e-07 vmr)', ytitle='Pressure (log hPa)', $
  title='lat:' + STRTRIM(la0) + ' lon:' + STRTRIM(lo0), xrange=[-5, 5]

The plot command draws a vertical profile graph from the dataset.
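The rescaling step in Figure 2 is simple arithmetic. Sketched in plain Python (the pressure and mixing-ratio sample values below are invented for illustration, not taken from the MLS file):

```python
import math

lev = [1000.0, 100.0, 10.0]      # pressure levels in hPa (made-up sample values)
x   = [2.0e-8, 1.5e-7, 3.0e-7]   # CO volume mixing ratio profile (made-up)

z = [math.log10(p) for p in lev]  # log-pressure vertical coordinate
a = [1.0e7 * v for v in x]        # rescale vmr into units of 1.0e-07

assert abs(z[0] - 3.0) < 1e-12 and abs(z[2] - 1.0) < 1e-12
assert abs(a[0] - 0.2) < 1e-9
```

The factor 10000000 in the IDL code is the same 1.0e7 scaling, which is why the x-axis label reads "CO (1.0e-07 vmr)".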

After processing all datasets, close the file using H5F_CLOSE (Figure 3).

See the complete code here. Download the sample file. Run the code as follows:

Figure 4: Running the MLS IDL code

idl mls_local.idl

Figure 5 shows the result.

hdf5.pro – FIDASIM

FUNCTION valid_name, name, bad_names=bad_names, post=post
  reserved = ['AND', 'BEGIN', 'BREAK', 'CASE', 'COMMON', 'COMPILE_OPT', $
              'CONTINUE', 'DO', 'ELSE', 'END', 'ENDCASE', 'ENDELSE', $
              'ENDFOR', 'ENDFOREACH', 'ENDIF', 'ENDREP', 'ENDSWITCH', 'ENDWHILE', $
              'EQ', 'FOR', 'FOREACH', 'FORWARD_FUNCTION', 'FUNCTION', 'GE', $
              'GOTO', 'GT', 'IF', 'INHERITS', 'LE', 'LT', 'MOD', 'NE', 'NOT', 'OF', $
              'ON_IOERROR', 'OR', 'PRO', 'REPEAT', 'SWITCH', 'THEN', 'UNTIL', $
              'WHILE', 'XOR']
  if not keyword_set(bad_names) then bad_names = ''
  if not keyword_set(post) then post = ''
  bad_names = [reserved, bad_names]
  if total(strmatch(bad_names, name, /fold_case)) ne 0 then begin
    valid_name = name + '_' + post
  endif else begin
    valid_name = name
  endelse
  ; added 2016-09-15 by NGB to fix invalid structure tags
  valid_name = idl_validname(valid_name, /convert_all)
  return, valid_name
END

FUNCTION create_nested_struct, path, data
  rpath = reverse(strsplit(path, '/', /extract))
  d = data
  for i = 0, n_elements(rpath)-1 do begin
    varname = valid_name(rpath[i])
    d = create_struct(varname, d)
  endfor
  return, d
END

FUNCTION hdf5_read_attributes, id, bad_names=bad_names
  ;; Get any attributes
  natts = h5a_get_num_attrs(id)
  atts = {}
  for i = 0L, natts-1 do begin
    ;; Open attribute id
    attribute_id = h5a_open_idx(id, i)
    ;; Get attribute name and make sure its valid
    attribute_name = h5a_get_name(attribute_id)
    attribute_name = valid_name(attribute_name, $
                                bad_names=bad_names, $
                                post=strcompress(string(i), /remove_all))
    ;; Get the attribute data
    attribute_data = h5a_read(attribute_id)
    atts = create_struct(atts, attribute_name, attribute_data)
    ;; Close attribute id
    h5a_close, attribute_id
  endfor
  return, atts
END

FUNCTION hdf5_read_dataset, id, name, shallow=shallow
  ;; Get data
  dataset_id = h5d_open(id, name)
  data = h5d_read(dataset_id)
  ;; Get any attributes
  atts = hdf5_read_attributes(dataset_id, bad_names="data")
  ;; Close the dataset
  h5d_close, dataset_id
  if keyword_set(shallow) then begin
    return, data
  endif else begin
    return, create_struct(atts, "data", data)
  endelse
END

FUNCTION hdf5_read_group, id, shallow=shallow
  FORWARD_FUNCTION hdf5_read_group
  nobjs = h5g_get_num_objs(id)
  d = {}
  for i = 0, nobjs-1 do begin
    obj_name = h5g_get_obj_name_by_idx(id, i)
    var_name = valid_name(obj_name)
    obj_info = h5g_get_objinfo(id, obj_name)
    obj_type = obj_info.type
    CASE obj_type OF
      'GROUP': BEGIN
        gid = h5g_open(id, obj_name)
        var = hdf5_read_group(gid, shallow=shallow)
        h5g_close, gid
        if n_elements(var) ne 0 then d = create_struct(d, var_name, var)
      END
      'DATASET': BEGIN
        var = hdf5_read_dataset(id, obj_name, shallow=shallow)
        if n_elements(var) ne 0 then d = create_struct(d, var_name, var)
      END
      'TYPE': BEGIN
        tid = h5t_open(id, obj_name)
        var = hdf5_read_attributes(tid)
        h5t_close, tid
        if n_elements(var) ne 0 then d = create_struct(d, var_name, var)
      END
      ELSE:
    ENDCASE
  endfor
  if not keyword_set(shallow) then begin
    atts = hdf5_read_attributes(id)
  endif
  if n_elements(atts) ne 0 then begin
    return, create_struct(d, atts)
  endif else begin
    return, d
  endelse
END

FUNCTION hdf5_read_from_list, id, var_paths, flatten=flatten, shallow=shallow
  d = {}
  used_names = []
  i = 0L
  while i lt n_elements(var_paths) do begin
    catch, err_status
    if err_status ne 0 then begin
      print, 'Error reading ' + var_paths[i]
      print, !ERROR_STATE.MSG
      catch, /cancel
      i = i + 1
      continue
    endif
    path = var_paths[i]
    obj_info = h5g_get_objinfo(id, path)
    obj_type = obj_info.type
    CASE obj_type OF
      'LINK': BEGIN
        var_name = h5g_get_linkval(id, path)
        var_paths = [var_paths, var_name]
      END
      'GROUP': BEGIN
        gid = h5g_open(id, path)
        var = hdf5_read_group(gid, shallow=shallow)
        h5g_close, gid
        if n_elements(var) ne 0 then begin
          if keyword_set(flatten) then begin
            var_names = strsplit(path, '/', /extract)
            var_name = valid_name(var_names[-1], $
                                  bad_names=used_names, $
                                  post=strcompress(string(i), /remove_all))
            used_names = [used_names, var_name]
            d = create_struct(d, var_name, var)
          endif else begin
            d = create_struct(d, create_nested_struct(path, var))
          endelse
        endif
      END
      'DATASET': BEGIN
        var = hdf5_read_dataset(id, path, shallow=shallow)
        if n_elements(var) ne 0 then begin
          if keyword_set(flatten) then begin
            var_names = strsplit(path, '/', /extract)
            var_name = valid_name(var_names[-1], $
                                  bad_names=used_names, $
                                  post=strcompress(string(i), /remove_all))
            used_names = [used_names, var_name]
            d = create_struct(d, var_name, var)
          endif else begin
            d = create_struct(d, create_nested_struct(path, var))
          endelse
        endif
      END
      'TYPE': BEGIN
        tid = h5t_open(id, path)
        var = hdf5_read_attributes(tid)
        h5t_close, tid
        if n_elements(var) ne 0 then begin
          if keyword_set(flatten) then begin
            var_names = strsplit(path, '/', /extract)
            var_name = valid_name(var_names[-1], $
                                  bad_names=used_names, $
                                  post=strcompress(string(i), /remove_all))
            used_names = [used_names, var_name]
            d = create_struct(d, var_name, var)
          endif else begin
            d = create_struct(d, create_nested_struct(path, var))
          endelse
        endif
      END
      ELSE:
    ENDCASE
    i = i + 1
  endwhile
  return, d
END

FUNCTION read_hdf5, filename, paths=paths, flatten=flatten, shallow=shallow
  ;+#read_hdf5
  ;+ Reads HDF5 file variables and attributes
  ;+***
  ;+## Arguments
  ;+     **filename**: HDF5 file
  ;+
  ;+## Keyword Arguments
  ;+     **paths**: Paths to variables to be read
  ;+
  ;+     **flatten**: Flatten tree structure
  ;+
  ;+     **shallow**: Performs a shallow read i.e. no dataset/group attributes
  ;+
  ;+## Return Value
  ;+ Structure containing variables and attributes
  ;+
  ;+## Example Usage
  ;+```idl
  ;+IDL> a = read_hdf5("./test_1a_geometry.h5")
  ;+IDL> b = read_hdf5("./test_1a_geometry.h5", paths="/spec/lens", /flatten, /shallow)
  ;+```
  if file_test(filename) then begin
    ;; Open file
    fid = h5f_open(filename)
    if not keyword_set(paths) then begin
      ;; Read group and sub-groups
      d = hdf5_read_group(fid, shallow=shallow)
    endif else begin
      ;; Read datasets from list
      d = hdf5_read_from_list(fid, paths, flatten=flatten, shallow=shallow)
    endelse
    ;; Close file
    h5f_close, fid
  endif else begin
    print, 'File does not exist'
    return, 0
  endelse
  return, d
END

HDF5 Overview

Current version: HDF5 1.8.4

The Hierarchical Data Format (HDF) version 5 file format was designed for scientific data consisting of a hierarchy of datasets and attributes (or metadata). HDF is a product of the National Center for Supercomputing Applications (NCSA), which supplies the underlying C-language library; IDL provides access to this library via a set of procedures and functions contained in a dynamically loadable module (DLM).

IDL’s HDF5 routines all begin with the prefix “H5_” or “H5*_”.

Programming Model

Hierarchical Data Format files are organized in a hierarchical structure. The two primary structures are:

The HDF5 group — a grouping structure containing instances of zero or more groups or datasets, together with supporting metadata.

The HDF5 dataset — a multidimensional array of data elements, together with supporting metadata.

HDF attributes are small named datasets that are attached to primary datasets, groups, or named datatypes.

Code Examples

Reading an Image

The following example opens the hdf5_test.h5 file and reads in a sample image. It is assumed that the user already knows the dataset name, either from using h5dump or from the H5G_GET_MEMBER_NAME function.

PRO ex_read_hdf5
  ; Open the HDF5 file.
  file = FILEPATH('hdf5_test.h5', $
    SUBDIRECTORY=['examples', 'data'])
  file_id = H5F_OPEN(file)
  ; Open the image dataset within the file.
  ; This is located within the /images group.
  ; We could also have used H5G_OPEN to open up the group first.
  dataset_id1 = H5D_OPEN(file_id, '/images/Eskimo')
  ; Read in the actual image data.
  image = H5D_READ(dataset_id1)
  ; Open up the dataspace associated with the Eskimo image.
  dataspace_id = H5D_GET_SPACE(dataset_id1)
  ; Retrieve the dimensions so we can set the window size.
  dimensions = H5S_GET_SIMPLE_EXTENT_DIMS(dataspace_id)
  ; Now open and read the color palette associated with this image.
  dataset_id2 = H5D_OPEN(file_id, '/images/Eskimo_palette')
  palette = H5D_READ(dataset_id2)
  ; Close all our identifiers so we don't leak resources.
  H5S_CLOSE, dataspace_id
  H5D_CLOSE, dataset_id1
  H5D_CLOSE, dataset_id2
  H5F_CLOSE, file_id
  ; Display the data.
  DEVICE, DECOMPOSED=0
  WINDOW, XSIZE=dimensions[0], YSIZE=dimensions[1]
  TVLCT, palette[0,*], palette[1,*], palette[2,*]
  ; Use /ORDER since the image is stored top-to-bottom.
  TV, image, /ORDER
END

Reading a Subselection

The following example reads only a portion of the previous image, using the dataspace keywords to H5D_READ.

PRO ex_read_hdf5_select
  ; Open the HDF5 file.
  file = FILEPATH('hdf5_test.h5', $
    SUBDIRECTORY=['examples', 'data'])
  file_id = H5F_OPEN(file)
  ; Open the image dataset within the file.
  dataset_id1 = H5D_OPEN(file_id, '/images/Eskimo')
  ; Open up the dataspace associated with the Eskimo image.
  dataspace_id = H5D_GET_SPACE(dataset_id1)
  ; Now choose our hyperslab. We will pick out only the central
  ; portion of the image.
  start = [100, 100]
  count = [200, 200]
  ; Be sure to use /RESET to turn off all other selected elements.
  H5S_SELECT_HYPERSLAB, dataspace_id, start, count, $
    STRIDE=[2, 2], /RESET
  ; Create a simple dataspace to hold the result. If we didn't supply
  ; the memory dataspace, then the result would be the same size
  ; as the image dataspace, with zeroes everywhere except our
  ; hyperslab selection.
  memory_space_id = H5S_CREATE_SIMPLE(count)
  ; Read in the actual image data.
  image = H5D_READ(dataset_id1, FILE_SPACE=dataspace_id, $
    MEMORY_SPACE=memory_space_id)
  ; Now open and read the color palette associated with this image.
  dataset_id2 = H5D_OPEN(file_id, '/images/Eskimo_palette')
  palette = H5D_READ(dataset_id2)
  ; Close all our identifiers so we don't leak resources.
  H5S_CLOSE, memory_space_id
  H5S_CLOSE, dataspace_id
  H5D_CLOSE, dataset_id1
  H5D_CLOSE, dataset_id2
  H5F_CLOSE, file_id
  ; Display the data.
  DEVICE, DECOMPOSED=0
  WINDOW, XSIZE=count[0], YSIZE=count[1]
  TVLCT, palette[0,*], palette[1,*], palette[2,*]
  ; We need to use /ORDER since the image is stored top-to-bottom.
  TV, image, /ORDER
END

Creating a Data File

The following example creates a simple HDF5 data file with a single sample data set. The file is created in the current working directory.

PRO ex_create_hdf5
  file = filepath('hdf5_out.h5')
  fid = H5F_CREATE(file)
  ;; create data
  data = hanning(100, 150)
  ;; get data type and space, needed to create the dataset
  datatype_id = H5T_IDL_CREATE(data)
  dataspace_id = H5S_CREATE_SIMPLE(size(data, /DIMENSIONS))
  ;; create dataset in the output file
  dataset_id = H5D_CREATE(fid, $
    'Sample data', datatype_id, dataspace_id)
  ;; write data to dataset
  H5D_WRITE, dataset_id, data
  ;; close all open identifiers
  H5D_CLOSE, dataset_id
  H5S_CLOSE, dataspace_id
  H5T_CLOSE, datatype_id
  H5F_CLOSE, fid
END

Reading Partial Datasets

To read a portion of a compound dataset or attribute, create a datatype that matches only the elements you wish to retrieve, and specify that datatype as the second argument to the H5D_READ function. The following example creates a simple HDF5 data file in the current directory, then opens the file and reads a portion of the data.

; Create sample data in an array of structures with two fields
struct = {time:0.0, data:intarr(40)}
r = REPLICATE(struct, 20)
r.time = RANDOMU(seed, 20)*1000
r.data = INDGEN(40, 20)
; Create a file
file = 'h5_test.h5'
fid = H5F_CREATE(file)
; Create a datatype based on a single element of the array
dt = H5T_IDL_CREATE(struct)
; Create a 20 element dataspace
ds = H5S_CREATE_SIMPLE(N_ELEMENTS(r))
; Create and write the dataset
d = H5D_CREATE(fid, 'dataset', dt, ds)
H5D_WRITE, d, r
; Close the file
H5F_CLOSE, fid
; Open the file for reading
fid = H5F_OPEN(file)
; Open the dataset
d = H5D_OPEN(fid, 'dataset')
; Define the data we want to read from the dataset
struct = {data:intarr(40)}
; Create a datatype denoting the portion to be read
dt = H5T_IDL_CREATE(struct)
; Read only the data that matches our datatype. The returned value
; will be a 20 element structure with only one tag, 'DATA', each
; element of which will be a [40] element integer array.
result = H5D_READ(d, dt)
H5F_CLOSE, fid

The IDL HDF5 Library

The IDL HDF5 library consists of an almost direct mapping between the HDF5 library functions and the IDL functions and procedures. The relationship between the IDL routines and the HDF5 library is described in the following subsections.

Routine Names

The IDL routine names are typically identical to the HDF5 function names, with the exception that an underscore is added between the prefix and the actual function. For example, the C function H5get_libversion() is implemented by the IDL function H5_GET_LIBVERSION.
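This naming convention is mechanical enough to sketch in a few lines of Python (an illustration of the rule, not an official tool): match the routine prefix, insert an underscore, and upper-case the remainder.

```python
# HDF5 API prefixes, longest first so 'H5F' is matched before 'H5'.
PREFIXES = ('H5A', 'H5D', 'H5F', 'H5G', 'H5I', 'H5R', 'H5S', 'H5T', 'H5')

def idl_name(c_name: str) -> str:
    """Map an HDF5 C function name to the corresponding IDL routine name."""
    for p in PREFIXES:
        if c_name.startswith(p):
            return p + '_' + c_name[len(p):].upper()
    raise ValueError('not an HDF5 function name: ' + c_name)

assert idl_name('H5get_libversion') == 'H5_GET_LIBVERSION'
assert idl_name('H5Fopen') == 'H5F_OPEN'
assert idl_name('H5Dread') == 'H5D_READ'
```

The assertions reproduce the correspondences quoted earlier in this document (H5Fopen → H5F_OPEN, H5Dread → H5D_READ).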

The IDL HDF5 library contains the following function categories:

Prefix | Category   | Purpose
H5     | Library    | General library tasks
H5A    | Attribute  | Manipulate attribute datasets
H5D    | Dataset    | Manipulate general datasets
H5F    | File       | Create, open, and close files
H5G    | Group      | Handle groups of other groups or datasets
H5I    | Identifier | Query object identifiers
H5R    | Reference  | Reference identifiers
H5S    | Dataspace  | Handle dataspace dimensions and selection
H5T    | Datatype   | Handle dataset element information

Functions Versus Procedures

HDF5 functions that only return an error code are typically implemented as IDL procedures. An example is H5F_CLOSE, which takes a single file identifier number as the argument and closes the file. HDF5 functions that return values are implemented as IDL functions. An example is H5F_OPEN, which takes a filename as the argument and returns a file identifier number.

Error Handling

All HDF5 functions that return an error or status code are checked for failure. If an error occurs, the HDF5 error handling code is called to retrieve the internal HDF5 error message. This error message is printed to the output window, and program execution stops.

Dimension Order

HDF5 uses C row-major ordering instead of IDL column-major ordering. For row major, the first listed dimension varies slowest, while for column major the first listed dimension varies fastest. The IDL HDF5 library handles this difference by automatically reversing the dimensions for all functions that accept lists of dimensions.

For example, an HDF5 file may be known to contain a dataset with dimensions [5][10][50], either as declared in the C code, or from the output from the h5dump utility. When this dataset is read into IDL, the array will have the dimensions listed as [50, 10, 5], using the output from the IDL help function.
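The reversal is purely a bookkeeping convention: an element keeps the same position in the flat buffer either way. A stdlib-Python sketch, using the dimensions from the paragraph above:

```python
c_dims = (5, 10, 50)                 # as declared in C or shown by h5dump (row major)
idl_dims = tuple(reversed(c_dims))   # as listed by IDL's HELP (column major)
assert idl_dims == (50, 10, 5)

# Element [i][j][k] in C corresponds to element [k, j, i] in IDL, and both
# index the same flat buffer position.
i, j, k = 2, 7, 31
c_offset = ((i * c_dims[1]) + j) * c_dims[2] + k        # row major: last index fastest
idl_offset = k + idl_dims[0] * (j + idl_dims[1] * i)    # column major: first index fastest
assert c_offset == idl_offset
```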

HDF5 Datatypes

In HDF5, a datatype is an object that describes the storage format of the individual data points of a data set. There are two categories of datatypes: atomic and compound.

Atomic datatypes cannot be decomposed into smaller units at the API level.

Compound datatypes are a collection of one or more atomic types or small arrays of such types. Compound datatypes are similar to a struct in C or a common block in Fortran. See Compound Datatypes for additional details.

In addition, HDF5 uses the following terms for different datatype concepts:

A named datatype is a datatype that is named and stored in a file. Naming is permanent; a datatype cannot be changed after being named. Named datatypes are created from in-memory datatypes using the H5T_COMMIT routine.

An opaque datatype is a mechanism for describing data which cannot be otherwise described by HDF5. The only properties associated with opaque types are the size in bytes and an ASCII tag string. See Opaque Datatypes for additional details.

An enumeration datatype is a one-to-one mapping between a set of symbols and an ordered set of integer values. The symbols are passed between IDL and the underlying HDF5 library as character strings. All the values for a particular enumeration datatype are of the same integer type. See Enumeration Datatypes for additional details.

A variable length array datatype is a sequence of existing datatypes (atomic, variable length, or compound) which are not fixed in length from one dataset location to another. See Variable Length Array Datatypes for additional details.

Compound Datatypes

HDF5 compound datatypes can be compared to C structures, Fortran structures, or SQL records. Compound datatypes can be nested; there is no limitation to the complexity of a compound datatype. Each member of a compound datatype must have a descriptive name, which is the key used to uniquely identify the member within the compound datatype.
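As a rough analogy using Python's stdlib struct module (not the HDF5 API, and ignoring any padding HDF5 may choose to insert), the compound type used later in this document, {time: 32-bit float, data: 40 x 16-bit integers}, is a fixed record layout whose members sit at known offsets:

```python
import struct

# One record: a 32-bit float followed by 40 signed 16-bit integers,
# packed without padding ('<' = little endian, no alignment).
record_fmt = '<f40h'
assert struct.calcsize(record_fmt) == 4 + 40 * 2   # 84 bytes per record

packed = struct.pack(record_fmt, 1.5, *range(40))
fields = struct.unpack(record_fmt, packed)
time, data = fields[0], fields[1:]
assert time == 1.5 and len(data) == 40
```

In a real compound datatype the members are addressed by name ('time', 'data') rather than by position, which is the point of the descriptive-name requirement above.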

Use one of the H5T_COMPOUND_CREATE or H5T_IDL_CREATE routines to create compound datatypes. Use the following routines to work with compound datatypes:

Example

See H5F_CREATE for an extensive example using compound datatypes.

Opaque Datatypes

An opaque datatype contains a series of bytes. It always contains a single element, regardless of the length of the series of bytes it contains.

When an IDL variable is written to a dataset or attribute defined as an opaque datatype, it is written as a string of bytes with no demarcation. When data in an opaque datatype is read into an IDL variable, it is returned as a byte array. Use the FIX routine to convert the returned byte array to the appropriate IDL data type.

Use the H5T_IDL_CREATE routine with the OPAQUE keyword to create opaque datatypes. To create an opaque array, use an opaque datatype with the H5T_ARRAY_CREATE routine. A single string tag can be assigned to an opaque datatype to provide auxiliary information about what is contained therein. Create tags using the H5T_SET_TAG routine; retrieve tags using the H5T_GET_TAG routine. HDF5 limits the length of the tag to 255 characters.

Example

The following example creates an opaque datatype and stores within it a 20-element integer array.

; Create a file to hold the data
file = 'h5_test.h5'
fid = H5F_CREATE(file)
; Create some data
data = INDGEN(20)
; Create an opaque datatype
dt = H5T_IDL_CREATE(data, /OPAQUE)
; Create a single element dataspace
ds = H5S_CREATE_SIMPLE(1)
; Create and write the dataset
d = H5D_CREATE(fid, 'dataset', dt, ds)
H5D_WRITE, d, data
; Close the file
H5F_CLOSE, fid
; Reopen file for reading
fid = H5F_OPEN(file)
; Read in the data
d = H5D_OPEN(fid, 'dataset')
result = H5D_READ(d)
; Close the file
H5F_CLOSE, fid
HELP, result

IDL prints:

RESULT BYTE = Array[40]

Note that the result is a 40-element byte array, since each integer requires two bytes.
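The 2-bytes-per-integer arithmetic can be checked with Python's stdlib struct module (an analogy for the byte layout, not the IDL FIX call):

```python
import struct

# 20 signed 16-bit integers serialized as a raw byte string, the way an
# opaque datatype stores them: just bytes, no demarcation.
raw = struct.pack('<20h', *range(20))
assert len(raw) == 40              # 20 integers x 2 bytes each

# Recovering typed values from the bytes is the analogue of the FIX conversion.
values = struct.unpack('<20h', raw)
assert values == tuple(range(20))
```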

Enumeration Datatypes

An enumeration datatype consists of a set of (Name, Value) pairs, where:

Name is a scalar string that is unique within the datatype (a given name string can only be associated with a single value).

Value is a scalar integer that is unique within the datatype.

Note: Name/value pairs must be assigned to the datatype before it is used to create a dataset. The dataset stores the state of the datatype at the time the dataset is created; additional changes to the datatype will not be reflected in the dataset.

Create the enumeration datatype using the H5T_ENUM_CREATE function. Once you have created an enumeration datatype:

use the H5T_ENUM_INSERT procedure to associate a single name/value pair with the datatype

use the H5T_ENUM_VALUEOF function to retrieve the value associated with a single name

use the H5T_ENUM_NAMEOF function to retrieve the name associated with a single value

These routines replicate the facilities provided by the underlying HDF5 library, which deals only with single name/value pairs. To make it easier to read and write entire enumerated lists, IDL provides two helper routines that package the name/value pairs in arrays of IDL_H5_ENUM structures, which have the following definition:

{IDL_H5_ENUM, NAME:'', VALUE:0}

The routines are:

H5T_ENUM_SET_DATA associates multiple name/value pairs with an enumeration datatype in a single operation. Data can be provided either as a string array of names and an integer array of values or as a single array of IDL_H5_ENUM structures.

H5T_ENUM_GET_DATA retrieves multiple name/value pairs from an enumeration datatype in a single operation. Data are returned in an array of IDL_H5_ENUM structures.

The H5T_ENUM_VALUES_TO_NAMES function is a helper routine that lets you retrieve the names associated with an array of values in a single operation.

The following routines may also be useful when working with enumeration datatypes:

H5T_GET_MEMBER_INDEX, H5T_GET_MEMBER_NAME , H5T_GET_MEMBER_VALUE

Example

The following example creates an enumeration datatype and saves it to a file. The example then reopens the file and reads the data, printing the names.

; Create a file to hold the data
file = 'h5_test.h5'
fid = H5F_CREATE(file)
; Create arrays to serve as name/value pairs
names = ['dog', 'pony', 'turtle', 'emu', 'wildebeest']
values = INDGEN(5)+1
; Create the enumeration datatype
dt = H5T_ENUM_CREATE()
; Associate the name/value pairs with the datatype
H5T_ENUM_SET_DATA, dt, names, values
; Create a dataspace, then create and write the dataset
ds = H5S_CREATE_SIMPLE(N_ELEMENTS(values))
d = H5D_CREATE(fid, 'dataset', dt, ds)
H5D_WRITE, d, values
; Close the file
H5F_CLOSE, fid
; Reopen file for reading
fid = H5F_OPEN(file)
; Read in the data
d = H5D_OPEN(fid, 'dataset')
dt = H5D_GET_TYPE(d)
result = H5D_READ(d)
; Close the file
H5F_CLOSE, fid
; Print the value associated with the name 'pony'
PRINT, H5T_ENUM_VALUEOF(dt, 'pony')
; Print all the name strings
PRINT, H5T_ENUM_VALUES_TO_NAMES(dt, result)

Variable Length Array Datatypes

HDF5 provides support for variable length arrays, but IDL itself does not. As a result, in order to store data in an HDF5 variable length array you must:

1. Create a series of vectors of data in IDL, each with a potentially different length. All vectors must be of the same data type.
2. Store a pointer to each data vector in the PDATA field of an IDL_H5_VLEN structure. The IDL_H5_VLEN structure is defined as follows: { IDL_H5_VLEN, pdata:PTR_NEW() }
3. Create an array of IDL_H5_VLEN structures that will be stored as an HDF5 variable length array.
4. Create a base HDF5 datatype from one of the data vectors.
5. Create an HDF5 variable length datatype from the base datatype.
6. Create an HDF5 dataspace of the appropriate size.
7. Create an HDF5 dataset.
8. Write the array of IDL_H5_VLEN structures to the HDF5 dataset.

Note: IDL string arrays are a special case: see Variable Length String Arrays for details.

Creating a variable length array datatype is a two-step process. First, you must create a base datatype using the H5T_IDL_CREATE function; all data in the variable length array must be of this datatype. Second, you create a variable length array datatype using the base datatype as an input to the H5T_VLEN_CREATE function.

Note: No explicit size is provided to the H5T_VLEN_CREATE function; sizes are determined as needed by the data being written.

Example: Writing a Variable Length Array

; Create a file to hold the data
file = 'h5_test.h5'
fid = H5F_CREATE(file)
; Create three vectors containing integer data
a = INDGEN(2)
b = INDGEN(3)
c = 3
; Create an array of three IDL_H5_VLEN structures
sArray = REPLICATE({IDL_H5_VLEN}, 3)
; Populate the IDL_H5_VLEN structures with pointers to
; the three data vectors
sArray[0].pdata = PTR_NEW(a)
sArray[1].pdata = PTR_NEW(b)
sArray[2].pdata = PTR_NEW(c)
; Create a datatype based on one of the data vectors
dt1 = H5T_IDL_CREATE(a)
; Create a variable length datatype based on the previously-created datatype
dt = H5T_VLEN_CREATE(dt1)
; Create a dataspace
ds = H5S_CREATE_SIMPLE(N_ELEMENTS(sArray))
; Create the dataset
d = H5D_CREATE(fid, 'dataset', dt, ds)
; Write the array of structures to the dataset
H5D_WRITE, d, sArray

Examples: Reading a Variable Length Array

Using the H5D_READ function to read data written as a variable length array creates an array of IDL_H5_VLEN structures. The following examples show how to refer to individual data elements of various HDF5 datatypes.

Atomic HDF5 Datatypes

To read and access data stored in variable length arrays of atomic HDF5 datatypes, simply dereference the pointer stored in the PDATA field of the appropriate IDL_H5_VLEN structure. For example, to retrieve the variable b from the data written in the above example:

data = H5D_READ(d)

b = *data[1].pdata

Compound HDF5 Datatypes

If you have a variable length array of compound datatypes, the tag named tag in the jth structure of the ith element of the variable length array would be accessed as follows:

data = H5D_READ(d)

a = (*data[i].pdata)[j].tag

Variable Length Arrays of Variable Length Arrays

If you have a variable length array of variable length arrays of integers, the kth integer of the jth element of a variable length array stored in the ith element of a variable length array would be accessed as follows:

data = H5D_READ(d)

a = (*(*data[i].pdata)[j].pdata)[k]

Compound Datatypes Containing Variable Length Arrays

If you have a compound datatype containing a variable length array, the kth data element of the jth variable length array in the ith compound datatype would be accessed as follows:

data = H5D_READ(d)

a = (*data[i].vl_array[j].pdata)[k]
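The pointer-dereference patterns above map directly onto nested containers. The following pure-Python sketch (an analogy, not IDL, and not using any HDF5 library) models an IDL_H5_VLEN structure as an object with a pdata attribute, so that the IDL dereference *ptr becomes plain attribute access; the helper vlen and the field name tag are hypothetical stand-ins:

```python
from types import SimpleNamespace as NS

# Model an IDL_H5_VLEN structure: a single 'pdata' field pointing at data.
def vlen(data):
    return NS(pdata=data)  # dereferencing *ptr becomes attribute access

# Variable length array of atomic values:
# IDL: b = *data[1].pdata
data = [vlen([0, 1]), vlen([0, 1, 2]), vlen([3])]
b = data[1].pdata

# Variable length array of compound datatypes:
# IDL: a = (*data[i].pdata)[j].tag
comp = [vlen([NS(tag=10), NS(tag=20)])]
a1 = comp[0].pdata[1].tag

# Variable length array of variable length arrays of integers:
# IDL: a = (*(*data[i].pdata)[j].pdata)[k]
nested = [vlen([vlen([7, 8, 9])])]
a2 = nested[0].pdata[0].pdata[2]

print(b, a1, a2)
```

Each level of nesting adds one dereference-then-index step, which is exactly what the parenthesized IDL expressions spell out.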

Variable Length String Arrays

Because the data vectors referenced by the pointers stored in the PDATA field of the IDL_H5_VLEN structure must all have the same type and dimension, strings are handled as vectors of individual characters rather than as atomic units. This means that each element in a string array must be assigned to an individual IDL_H5_VLEN structure:

str = ['dog', 'dragon', 'duck']

sArray = REPLICATE({IDL_H5_VLEN},3)

sArray[0].pdata = ptr_new(str[0])

sArray[1].pdata = ptr_new(str[1])

sArray[2].pdata = ptr_new(str[2])

Use the H5T_STR_TO_VLEN function to assist in converting between an IDL string array and an HDF5 variable length string array. The following achieves the same result as the above five lines:

str = ['dog', 'dragon', 'duck']

sArray = H5T_STR_TO_VLEN(str)

Similarly, if you have an HDF5 variable length array containing string data, use the H5T_VLEN_TO_STR function to access the string data:

data = H5D_READ(d)

str = H5T_VLEN_TO_STR(data)
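The round trip performed by H5T_STR_TO_VLEN and H5T_VLEN_TO_STR amounts to wrapping each string in its own one-element structure and later dereferencing each wrapper in order. A pure-Python sketch of that idea (the functions str_to_vlen and vlen_to_str below are hypothetical analogies, not the IDL routines):

```python
from types import SimpleNamespace as NS

def str_to_vlen(strings):
    # Each string gets its own IDL_H5_VLEN-style wrapper.
    return [NS(pdata=s) for s in strings]

def vlen_to_str(vlen_array):
    # Dereference each wrapper, preserving order.
    return [v.pdata for v in vlen_array]

s_array = str_to_vlen(['dog', 'dragon', 'duck'])
print(vlen_to_str(s_array))
```

Converting back recovers the original string array unchanged.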

NAME: H5_BROWSER
PURPOSE: Provides a graphical user interface (GUI) to examine HDF5 files and import data into IDL.
CALLING SEQUENCE: Result = H5_BROWSER(Files [, /DIALOG_READ])
RETURN VALUE: If the DIALOG_READ keyword is set, then the Result is either a structure containing the dataset/group, or a 0 if the Cancel button was pressed. If DIALOG_READ is not set, then the Result is the widget ID for the base widget.
INPUTS: Files: A scalar or array of strings giving the file(s) to open in the browser. Users can also interactively import new files. Files may contain wildcard characters.
KEYWORD PARAMETERS: DIALOG_READ = If this keyword is set then the HDF5 browser is created as a modal Open/Cancel dialog instead of a standalone GUI. In this case, the IDL command line is blocked, and no further input is taken until the Open or Cancel button is pressed. If the GROUP_LEADER keyword is specified, then that widget ID is used as the group leader, otherwise a default group leader base is created. All keywords to WIDGET_BASE such as GROUP_LEADER, TITLE, etc. are passed on to the top level base.
EXAMPLE:
  file = FILEPATH('hdf5_test.h5', SUBDIR=['examples','data'])
  r = H5_BROWSER(file)
CALLS: ARRAY_INDICES, CONGRID, CW_TREESTRUCTURE, FILEPATH, H5_BROWSER_ADDFILE, H5_BROWSER_CHECKNAME, H5_BROWSER_EVENT, H5_BROWSER_FILEEXIT, H5_BROWSER_FILEOPEN, H5_BROWSER_GET_PALETTE, H5_BROWSER_IMPORT, H5_BROWSER_KILLNOTIFY, H5_BROWSER_PARSEVALUE, H5_BROWSER_STRMIXCASE, H5_BROWSER_TOGGLE, H5_BROWSER_TREE_EVENT, H5_BROWSER_TREE_PREVIEW, H5_PARSE, LOADCT, REVERSE, STRSPLIT, XMANAGER
MODIFICATION HISTORY: Written by: CT, RSI, June 2002. Modified by: AJ, CREASO B.V., February 12th 2003: Use with ENVI.

NAME: H5_CREATE
PURPOSE: Creates an HDF5 file based on a nested structure containing all of the groups, datasets, and attributes.
CALLING SEQUENCE: H5_CREATE, File, Data
INPUTS: File: A scalar string giving the file to create. Data: A (nested) structure.
KEYWORD PARAMETERS: None.
CALLS: H5_CREATE_IDL_CREATE, IDLFFH5CREATE::H5_CREATE_ATTRIBUTE, IDLFFH5CREATE::H5_CREATE_DATASET, IDLFFH5CREATE::H5_CREATE_DATATYPE, IDLFFH5CREATE::H5_CREATE_DATATYPE_GET_DATA, IDLFFH5CREATE::H5_CREATE_GROUP, IDLFFH5CREATE::H5_CREATE_IDL_CREATE, IDLFFH5CREATE::H5_CREATE_IDL_CREATE_STRUCT, IDLFFH5CREATE::H5_CREATE_LINK, IDLFFH5CREATE::H5_CREATE_VALIDATE_STRUCTURE, IDLFFH5CREATE::INIT, IDLFFH5CREATE__DEFINE
MODIFICATION HISTORY: Written by: AGEH, RSI, August 2004.

NAME: H5_PARSE
PURPOSE: Parses an HDF5 file and returns a nested structure containing all of the groups, datasets, and attributes.
CALLING SEQUENCE: Result = H5_PARSE(File [, /READ_DATA]) or Result = H5_PARSE(Hid, Name [, FILE=string] [, PATH=string] [, /READ_DATA])
RETURN VALUE: Result: A nested structure.
INPUTS: File: A scalar string giving the file to parse. Hid: An integer giving the identifier of the file or group in which to access the object. Name: A string giving the name of the group, dataset, or datatype within Hid to parse.
KEYWORD PARAMETERS:
FILE = Set this optional keyword to a string containing the filename to which the Hid belongs. This value is only used to fill in the FILE field within the structure. This keyword is ignored when the File argument is supplied.
PATH = Set this optional keyword to a string containing the fully qualified path within the HDF5 file of the Hid group. This value is only used to fill in the PATH field within the structure. This keyword is ignored when the File argument is supplied.
READ_DATA = Set this keyword to automatically read in all data while parsing the file. The default is to only read datasets or attributes with 10 elements or less.
CALLS: H5_ATTRIBUTE_PARSE, H5_DATASET_PARSE, H5_DATASPACE_PARSE, H5_DATATYPE_PARSE, H5_GROUP_PARSE, H5_OBJECT_PARSE, H5_PARSE_ATTRIBUTES, H5_PARSE_READDATA, H5_PARSE_VALIDATE_TAGNAME
CALLED BY: H5_BROWSER
EXAMPLE: Parsing an entire file:
  file = FILEPATH('hdf5_test.h5', SUBDIR=['examples','data'])
  struc = H5_PARSE(file, /READ_DATA)
  image = struc.images.eskimo
  palette = struc.images.eskimo_palette
  DEVICE, DECOMPOSED=0
  WINDOW, XSIZE=image._dimensions[0], YSIZE=image._dimensions[1]
  TVLCT, TRANSPOSE(palette._data)
  TV, image._data, /ORDER
Parsing an already open group:
  hid = H5F_OPEN(file)
  gid = H5G_OPEN(hid, '/arrays')
  struc = H5_PARSE(gid, '2D float array', $
    FILE='hdf5_test.h5', PATH='/arrays', /READ_DATA)
  TVSCL, struc._data
MODIFICATION HISTORY: Written by: CT, RSI, June 2002.

NAME: H5T_ENUM_GET_DATA
PURPOSE: Retrieves all the data from an enumeration datatype and bundles it up into an array of structures.
CALLING SEQUENCE: result = H5T_ENUM_GET_DATA(datatype_id)
PARAMETERS: DATATYPE_ID: An integer giving the identifier of the enumeration datatype.
KEYWORD PARAMETERS: None.
CALLED BY: H5T_ENUM_VALUES_TO_NAMES
MODIFICATION HISTORY: Written by: AGEH, RSI, June 2005.

NAME: H5T_ENUM_SET_DATA
PURPOSE: Sets multiple name/value data pairs on an enumeration datatype.
CALLING SEQUENCE: H5T_ENUM_SET_DATA, datatype_id, data, values
PARAMETERS:
DATATYPE_ID - An integer giving the identifier of the enumeration datatype.
DATA - If Data is a string array then Data gives the names of the corresponding members and Values is required. If Data is an array of structures, each with two fields, NAME, a string, and VALUE, an integer, then Data supplies all the needed information and Values is ignored.
VALUES - An integer array giving the values of the corresponding members. This is needed only if Data is a string array.
KEYWORD PARAMETERS: None.
MODIFICATION HISTORY: Written by: AGEH, RSI, June 2005.

NAME: H5T_ENUM_VALUES_TO_NAMES
PURPOSE: Converts values returned from H5D_READ to names.
CALLING SEQUENCE: result = H5T_ENUM_VALUES_TO_NAMES(datatype_id, values)
PARAMETERS: DATATYPE_ID: An integer giving the identifier of the enumeration datatype. VALUES: An integer array of data returned from H5D_READ.
KEYWORD PARAMETERS: None.
CALLS: H5T_ENUM_GET_DATA
MODIFICATION HISTORY: Written by: AGEH, RSI, July 2005.

NAME: H5T_STR_TO_VLEN
PURPOSE: Converts an IDL string array to an IDL_H5_VLEN array of strings.
CALLING SEQUENCE: result = H5T_STR_TO_VLEN(array)
PARAMETERS: ARRAY: A string array.
KEYWORD PARAMETERS: NO_COPY: If set, the original data will be lost after the routine exits.
MODIFICATION HISTORY: Written by: AGEH, RSI, July 2005.

NAME: H5T_VLEN_TO_STR
PURPOSE: Converts an IDL_H5_VLEN array of strings to an IDL string array.
CALLING SEQUENCE: result = H5T_VLEN_TO_STR(array)
PARAMETERS: ARRAY: An array of IDL_H5_VLEN structures pointing to strings.
KEYWORD PARAMETERS: PTR_FREE: If set, free the pointers in the IDL_H5_VLEN array.
MODIFICATION HISTORY: Written by: AGEH, RSI, July 2005.

NAME: HDF_EOS_QUERY
PURPOSE: Read the header of an HDF file and report on the number of EOS extensions as well as their names.
CATEGORY: Input/Output.
CALLING SEQUENCE: Result = EOS_QUERY(File [, Info])
INPUTS: File: Scalar string giving the name of the HDF file to query.
KEYWORD INPUTS: None.
OUTPUTS: Result is a long with the value of 1 if the query was successful (and the file type was correct) or 0 on failure. Info: (optional) An anonymous structure containing information about the file. This structure is valid only when the return value of the function is 1. The Info structure has the following fields:

  Field        IDL data type   Description
  -----        -------------   -----------
  GRID_NAMES   String array    Names of grids
  NUM_GRIDS    Long            Number of grids in file
  NUM_POINTS   Long            Number of points in file
  NUM_SWATHS   Long            Number of swaths in file
  POINT_NAMES  String array    Names of points
  SWATH_NAMES  String array    Names of swaths

CALLS: EOS_QUERY, IS_HDF_EOS
RESTRICTIONS: None.
EXAMPLE: To retrieve information from the HDF file named "foo.hdf" in the current directory, enter:

  result = EOS_QUERY("foo.hdf", info)
  IF (result GT 0) THEN BEGIN
    HELP, /STRUCT, info
  ENDIF ELSE BEGIN
    PRINT, 'HDF file not found or file does not contain EOS extensions.'
  ENDELSE

MODIFICATION HISTORY: Written December 1998, Scott J. Lasica.

NAME: HDF_EXISTS
PURPOSE: Test for the existence of the HDF library.
CATEGORY: File Formats.
CALLING SEQUENCE: Result = HDF_EXISTS()
INPUTS: None.
KEYWORD PARAMETERS: None.
OUTPUTS: Returns TRUE (1) if the HDF data format library is supported; returns FALSE (0) if it is not.
EXAMPLE: IF HDF_EXISTS() EQ 0 THEN Fail, "HDF not supported on this machine"
MODIFICATION HISTORY: Written by: Joshua Goldstein, 12/21/92. Modified by: Steve Penton, 12/27/95; Scott Lasica, 8/4/99.

H5_BROWSER

The H5_BROWSER function presents a graphical user interface for viewing and reading HDF5 files. The browser provides a tree view of the HDF5 file or files, a data preview window, and an information window for the selected objects. The browser may be created as either a selection dialog with Open/Cancel buttons, or as a standalone browser that can import data to the IDL main program level.

Note

This function is not part of the standard HDF5 interface, but is provided as a programming convenience.

Syntax

Result = H5_BROWSER([Files] [, /DIALOG_READ])

Return Value

If the DIALOG_READ keyword is specified then the Result is a structure containing the selected group or dataset (as described in the H5_PARSE function), or a zero if the Cancel button was pressed. If the DIALOG_READ keyword is not specified then the Result is the widget ID of the HDF5 browser.

Arguments

Files

An optional scalar string or string array giving the name of the files to initially open. Additional files may be opened interactively. If Files is not provided then the user is automatically presented with a File Open dialog upon startup.

Keywords

DIALOG_READ

If this keyword is set then the HDF5 browser is created as a modal Open/Cancel dialog instead of a standalone GUI. In this case, the IDL command line is blocked, and no further input is taken until the Open or Cancel button is pressed. If the GROUP_LEADER keyword is specified, then that widget ID is used as the group leader, otherwise a default group leader base is created.

All keywords to WIDGET_BASE, such as GROUP_LEADER and TITLE, are passed on to the top-level base.

Examples

The following example starts up the HDF5 browser on a sample file:

file = FILEPATH('hdf5_test.h5', SUBDIR=['examples','data'])
r = H5_BROWSER(file)

Reading an HDF file with C, FORTRAN, Python, IDL, MATLAB and R – ICARE Data and Services Center

ICARE HDF reader

obtaining information about the structure of an HDF file

extracting SDS data

reading SDS and file attributes

calibrating data


HDF4

PYTHON

#!/usr/local/bin/python
# -*- coding: utf-8 -*-
from pyhdf import SD

# HDF file and SDS names
FILE_NAME = "MYD04_L2.A2013060.1300.051.2013062021359.hdf"
SDS_NAME = "Optical_Depth_Land_And_Ocean"

# open the hdf file
hdf = SD.SD(FILE_NAME)

# select and read the sds data
sds = hdf.select(SDS_NAME)
data = sds.get()

# get dataset dimensions
nrows, ncols = data.shape
print(data.shape)  # (203, 135) for this file; (3712, 3712) in the SEVIRI AER-OC example

i = 200  # row index
j = 125  # col index
print(data[i, j])

# Terminate access to the data set
sds.endaccess()

# Terminate access to the SD interface and close the file
hdf.end()

C

#include "mfhdf.h"

const char file_name[100] = "MYD04_L2.A2013060.1300.051.2013062021359.hdf";
char sds_name[100] = "Optical_Depth_Land_And_Ocean";

main()
{
    /************************* Variable declaration **************************/
    int32 sd_id, sds_id, sds_index, rank, data_type, n_attrs;
    int32 dim_sizes[2];
    int32 start[2];
    int32 stride[2];
    int32 edges[2];
    int32 nrows, ncols;
    void *data;
    int i, j, k;
    /********************* End of variable declaration ***********************/

    /* Open the file */
    sd_id = SDstart(file_name, DFACC_READ);

    /* Get the index of the given data set SDS_NAME */
    sds_index = SDnametoindex(sd_id, sds_name);
    sds_id = SDselect(sd_id, sds_index);

    /* Get the name, rank, dimension sizes, data type and number of
       attributes for a data set */
    SDgetinfo(sds_id, sds_name, &rank, dim_sizes, &data_type, &n_attrs);
    nrows = dim_sizes[0];
    ncols = dim_sizes[1];

    start[0] = 0;             /* index of first row to read */
    start[1] = 0;             /* index of first column to read */
    edges[0] = dim_sizes[0];  /* the number of rows to read */
    edges[1] = dim_sizes[1];  /* the number of cols to read */
    stride[0] = 1;
    stride[1] = 1;

    data = (void *)malloc(nrows * ncols * DFKNTsize(data_type));

    /* Read entire data into the data array. The stride array specifies the
       reading pattern along each dimension: a stride of 1 reads every
       element along that dimension, 2 reads every other element, and so on.
       Passing NULL for stride in the C interface, or setting all strides to
       1 in either interface (C or FORTRAN), reads the data contiguously. */
    SDreaddata(sds_id, start, stride, edges, (VOIDP)data);

    i = 200;  /* row index */
    j = 125;  /* col index */
    k = i * ncols + j;
    printf("%d\n", ((int16 *)data)[k]);

    /* Terminate access to the data set. */
    SDendaccess(sds_id);

    /* Terminate access to the SD interface and close the file. */
    SDend(sd_id);
    free(data);
}

FORTRAN

      program read_data
      implicit none
C     Parameter declaration
      integer DFACC_READ, DFNT_INT32
      parameter(DFACC_READ = 1, DFNT_INT32 = 24)
      integer MAXDIM
      parameter(MAXDIM = 100000)
C     Function declaration
      integer sfstart, sfn2index, sfselect, sfginfo, sfrdata
      integer sfendacc, sfend
C     Variable declaration
      character*100 sds_name
      character*100 file_name
      integer sd_id, sds_id, sds_index, status
      integer rank, data_type, n_attrs
      integer dim_sizes(32), start(32), edges(32), stride(32)
      integer*2 data(MAXDIM)
      integer nrows, ncols, i, j, k
C     End of variable declaration
      file_name = "MYD04_L2.A2013060.1300.051.2013062021359.hdf"
      sds_name = "Optical_Depth_Land_And_Ocean"
C     Open the file
      sd_id = sfstart(file_name, DFACC_READ)
C     Get the index of the given data set SDS_NAME
      sds_index = sfn2index(sd_id, sds_name)
      sds_id = sfselect(sd_id, sds_index)
C     Get the name, rank, dimension sizes, data type and number of
C     attributes for a data set
      status = sfginfo(sds_id, sds_name, rank, dim_sizes, data_type,
     &                 n_attrs)
      nrows = dim_sizes(2)
      ncols = dim_sizes(1)
C     Define the location, pattern, and size of the data set
      start(1) = 0            ! index of first row to read
      start(2) = 0            ! index of first column to read
      edges(1) = dim_sizes(1) ! the number of cols to read
      edges(2) = dim_sizes(2) ! the number of rows to read
      stride(1) = 1           ! to read entire data
      stride(2) = 1
C     Read entire data into the data array. The stride array specifies
C     the reading pattern along each dimension. The sfrdata routine
C     reads numeric scientific data; sfrcdata reads character data.
      status = sfrdata(sds_id, start, stride, edges, data)
      i = 201 ! row index
      j = 126 ! col index
      k = (i-1)*ncols + j
      print*, data(k)
C     Terminate access to the data set
      status = sfendacc(sds_id)
C     Terminate access to the SD interface and close the file
      status = sfend(sd_id)
      end
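The C and FORTRAN examples above address the same cell with different flat-index formulas (k = i*ncols + j versus k = (i-1)*ncols + j) because C arrays are 0-based while FORTRAN arrays are 1-based. This small pure-Python sketch, which assumes the 203 x 135 shape quoted in the examples and uses a plain list as a stand-in for the data buffer, checks that both formulas land on the same element:

```python
# Row-major flat indexing: 0-based (C) vs 1-based (FORTRAN).
nrows, ncols = 203, 135
flat = list(range(nrows * ncols))  # stand-in for the SDS data buffer

i_c, j_c = 200, 125                # C: 0-based row/col indices
k_c = i_c * ncols + j_c            # C formula

i_f, j_f = 201, 126                # FORTRAN: same cell, 1-based indices
k_f = (i_f - 1) * ncols + j_f      # FORTRAN formula (1-based array access)

# FORTRAN's data(k_f) is the (k_f - 1)-th element in 0-based terms,
# which is exactly the element C reads at k_c.
print(k_c, k_f - 1)
```

Both expressions select element 27125 of the buffer, i.e. row 200, column 125 in 0-based terms.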

IDL

PRO read_hdf
  FILE_NAME = "MYD04_L2.A2013060.1300.051.2013062021359.hdf" ; nrows=203, ncols=135
  SDS_NAME = "Optical_Depth_Land_And_Ocean"

  ; Open the file
  sd_id = HDF_SD_START(FILE_NAME, /READ)

  ; Find the index of the SDS to read using its name, then select it
  sds_index = HDF_SD_NAMETOINDEX(sd_id, SDS_NAME)
  sds_id = HDF_SD_SELECT(sd_id, sds_index)

  ; Get data set information, including dimension sizes
  HDF_SD_GETINFO, sds_id, NAME=SDS_NAME, NATTS=num_attributes, $
                  NDIM=num_dims, DIMS=dim_sizes
  nrows = dim_sizes[1]
  ncols = dim_sizes[0]

  ; Define the subset to read; start at [0,0] and read everything
  start = INTARR(2)   ; start position of the data to be read
  start[0] = 0
  start[1] = 0
  edges = INTARR(2)   ; number of elements to read in each dimension
  edges[0] = dim_sizes[0]
  edges[1] = dim_sizes[1]

  ; Read the data; note that IDL allocates the data array for you
  HDF_SD_GETDATA, sds_id, data, START=start, COUNT=edges

  i = 200   ; row index
  j = 125   ; col index
  PRINT, FORMAT='(I," ",$)', data[j,i]   ; 65
  PRINT, ""

  ; End access to the SDS
  HDF_SD_ENDACCESS, sds_id
  ; Close the hdf file
  HDF_SD_END, sd_id
END

MATLAB

file_name = 'MYD04_L2.A2013060.1300.051.2013062021359.hdf';
sds_name = 'Optical_Depth_Land_And_Ocean';

% Open the hdf file read-only
sd_id = hdfsd('start', file_name, 'rdonly');

% Get the number of datasets and global attributes
[num_datasets, num_global_attr, status] = hdfsd('fileinfo', sd_id);

% Get the sds identifier of the dataset named sds_name
sds_index = hdfsd('nametoindex', sd_id, sds_name);
sds_id = hdfsd('select', sd_id, sds_index);

% Get the name, number of dimensions, size of each dimension, data type
% and number of attributes of the dataset
[sds_name, sds_num_dim, sds_dim, sds_data_type, sds_num_attr] = hdfsd('getinfo', sds_id);

% Read the dataset identified by sds_id
sds_start = zeros(1, sds_num_dim);   % position to begin reading
sds_stride = [];                     % interval between values to read ([] reads contiguously)
sds_edges = sds_dim;                 % number of values to read in each dimension
[sds_data, status] = hdfsd('readdata', sds_id, sds_start, sds_stride, sds_edges);

% Alternatively, read the dataset named sds_name with the high-level interface
data = hdfread(file_name, sds_name);

i = 201;   % row index (MATLAB indices are 1-based)
j = 126;   % col index
disp(data(i, j));
% Or, equivalently:
disp(sds_data(j, i));

% Close access to the dataset
status = hdfsd('endaccess', sds_id);
% Close access to the hdf file
status = hdfsd('end', sd_id);
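A quick way to see why the C and Fortran examples use k = (i-1)*ncols + j while the IDL and MATLAB examples read the same element as data[j,i]: the buffer on disk is row-major (C order), and column-major languages fill their fastest-varying first index from that same buffer, which swaps the axes. A small plain-Python sketch (illustrative only):

```python
# Row-major buffer, as stored in the HDF file (C order).
nrows, ncols = 3, 4
flat = list(range(nrows * ncols))

# How a row-major reader (C/Fortran index arithmetic) views it: nrows x ncols.
c_view = [[flat[i * ncols + j] for j in range(ncols)] for i in range(nrows)]

# How a column-major reader (IDL, MATLAB) views the same bytes: ncols x nrows,
# i.e. the transpose, which is why those examples print data[j, i].
cm_view = [[flat[i * ncols + j] for i in range(nrows)] for j in range(ncols)]

i, j = 2, 1
print(c_view[i][j], cm_view[j][i])  # 9 9
assert c_view[i][j] == cm_view[j][i]
```

Same bytes, same element, two index orders.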

HDF5

Fortran

program sds_info
  use hdf5
  implicit none

  ! Variable declarations
  CHARACTER*100 :: file_name
  CHARACTER*100 :: sds_name
  CHARACTER*100 :: gr_name
  CHARACTER*100 :: attr_name
  INTEGER(HID_T) :: file_id, gr_id, dset_id, attr_id
  INTEGER :: status, error, storage, nlinks, max_corder, attr_num
  REAL, DIMENSION(10,10) :: dset_data, data_out
  INTEGER, DIMENSION(1:10) :: buf
  INTEGER(HSIZE_T), DIMENSION(2) :: data_dims
  INTEGER(HSIZE_T), DIMENSION(1) :: dims

  ! Variable initialization
  file_name = "example.h5"
  sds_name = "/g1/g1.1/dset1.1.1"
  gr_name = "g3"
  attr_name = "attr1"
  data_dims = (/10, 10/)   ! size of the in-memory buffer passed to h5dread_f
  dims = (/10/)            ! size of the attribute buffer

  ! Initialize the interface
  call h5open_f(status)
  ! Open an hdf5 file
  call h5fopen_f(file_name, H5F_ACC_RDWR_F, file_id, status)
  ! Get the number of global attributes
  call h5aget_num_attrs_f(file_id, attr_num, error)
  print *, "attr_num ", attr_num
  ! Open a group
  call h5gopen_f(file_id, gr_name, gr_id, status)
  ! Get information about a group:
  !   storage    : type of storage for links in the group
  !                (compact storage, indexed storage or symbol tables)
  !   nlinks     : number of links in the group
  !   max_corder : current maximum creation order value for the group
  call h5gget_info_f(gr_id, storage, nlinks, max_corder, error)
  print *, "storage, nlinks, max_corder ", storage, nlinks, max_corder
  ! Open a dataset
  call h5dopen_f(file_id, sds_name, dset_id, error)
  ! Get the number of attributes of the dataset
  call h5aget_num_attrs_f(dset_id, attr_num, error)
  print *, "attr_num ", attr_num
  ! Read the dataset
  call h5dread_f(dset_id, H5T_NATIVE_REAL, data_out, data_dims, error)
  print *, "data_out ", data_out(2,2)
  ! Open an attribute
  call h5aopen_f(file_id, attr_name, attr_id, error)
  ! Read the attribute
  call h5aread_f(attr_id, H5T_NATIVE_INTEGER, buf, dims, error)
  print *, "buf ", buf
  ! Terminate access to the group
  call h5gclose_f(gr_id, error)
  ! Terminate access to the dataset
  call h5dclose_f(dset_id, error)
  ! Terminate access to the file
  call h5fclose_f(file_id, error)
  ! Close the FORTRAN interface
  call h5close_f(status)
end program sds_info

MATLAB

file_name = '/matlab/R2012a/toolbox/matlab/demos/example.h5';
sds_name = '/g4/lon';

% Open the hdf file in read-only mode
sd_id = H5F.open(file_name);

% Get file info by file id
file_info = H5F.get_info(sd_id)

% Get global info of the file
info = h5info(file_name)

% Display the structure of the file
h5disp(file_name);

% Open a group
g_id = H5G.open(sd_id, '/g2');

% Get group info
group_info = H5G.get_info(g_id)

% Open a dataset by name
sds_id = H5D.open(g_id, sds_name);

% Get global info of a dataset
sds_info = h5info(file_name, '/g4/time')

% Display the structure of a dataset
h5disp(file_name, sds_name);

% Read an attribute of a dataset
attval = h5readatt(file_name, '/g4/lon', 'units')

% Read all data of a dataset
data = H5D.read(sds_id)

% Read the first 5-by-3 subset of another dataset
sds_data = h5read(file_name, '/g4/world', [1 1], [5 3]);
disp(sds_data);

% Close dataset access
H5D.close(sds_id);
% Close group access
H5G.close(g_id);
% Close access to the hdf file
H5F.close(sd_id);

R

# To install the package rhdf5, you need a current version (>2.15.0) of R.
# After installing R, run the following commands from the R shell to install
# the Bioconductor package rhdf5:
#   source("http://bioconductor.org/biocLite.R")
#   biocLite("rhdf5")
library(rhdf5)

filename = "/home/Projets/R/example.h5"

# Read the data of all datasets of a group
h5read(filename, "g2")
# Read a dataset
h5read(filename, "g2/dset2.2")
# Display the contents of the main groups
h5ls(filename, FALSE)

# Open a hdf5 file
fid <- H5Fopen(filename)
# Get the number of global attributes
H5Oget_num_attrs(fid)
# Open the object "g2"
gid <- H5Oopen(fid, "g2")
# Get group info
H5Gget_info(gid)
# or: H5Gget_info_by_name(fid, "g2")
# Get the number of group attributes
H5Oget_num_attrs(gid)
# Open a dataset
did <- H5Dopen(fid, "g2/dset2.2")
# Get its dataspace
dataspace <- H5Dget_space(did)
dataspace
# Close dataspace access
H5Sclose(dataspace)
# Close dataset access
H5Dclose(did)
# Close group access
H5Oclose(gid)
# Close hdf5 file access
H5Fclose(fid)

ICARE has developed a package of libraries written in Fortran77, Fortran90, C, Python, IDL, MATLAB and R. See the README file of the desired language for a detailed description of how to use the library. In each language directory, real examples of library usage are available in the ./example directory; this is the place to begin for a quick start.
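A Python counterpart of the HDF5 examples above can be written with the third-party h5py package (this sketch assumes `pip install h5py` and NumPy; the file path, group and dataset names are made up for illustration). It creates a small file, then reopens it and reads a dataset and one of its attributes back, mirroring the open/read/close sequence of the Fortran, MATLAB and R versions:

```python
# Minimal h5py sketch (assumes h5py and numpy are installed): write a small
# HDF5 file, then reopen it read-only and read a dataset and attribute back.
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "example.h5")

# Write a 3x4 dataset inside group /g2 and attach a "units" attribute.
with h5py.File(path, "w") as f:
    dset = f.create_dataset("g2/dset2.2", data=np.arange(12.0).reshape(3, 4))
    dset.attrs["units"] = "degrees_east"

# Reopen read-only and pull the data back out.
with h5py.File(path, "r") as f:
    data = f["g2/dset2.2"][()]               # read the whole dataset
    units = f["g2/dset2.2"].attrs["units"]   # read one attribute

print(data.shape, data[1, 2], units)
```

As in the IDL HDF4 example, the library allocates the output array for you; `f["g2/dset2.2"][()]` returns a NumPy array with the dataset's full shape.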

