Feature/better hdf5 #83
Open
staleyLANL wants to merge 92 commits into develop from feature/better-hdf5
Conversation
Some files aren't finished yet and will be uploaded in a few days, so this particular update won't build!

Added broad infrastructure for HDF5 handling:
...Wrapper class
...Assignment operators
...convert() functions
...Constructors
...read() functions
...write() functions
Etc.

Completed many (but not all) HDF5 capabilities. The very basic HDF5 write needs to be smarter (not string-only), and HDF5 reading still needs work. Touchups are needed here and there throughout the new HDF5 code; in particular, we need smart, non-string handling. Small tweaks were made to the XML and JSON material for consistency, along with some comment changes and other odds and ends.
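As a rough sketch of where this is headed, here's what usage might look like once the wrapper is complete, assuming it ends up mirroring the existing XML and JSON wrappers. The header name, namespace, and exact convert() signature below are assumptions, not confirmed API, and per the note above this branch doesn't build yet.

```cpp
// Minimal usage sketch only; names and signatures are assumptions based on
// the existing XML/JSON wrappers, and the HDF5 class is still in progress.
#include "GNDStk.hpp"
using namespace njoy::GNDStk;

int main()
{
   Tree tree;
   tree.read("input.xml");   // hypothetical GNDS input file

   HDF5 h5;
   convert(tree, h5);        // one of the new convert() functions
   h5.write("output.h5");    // very basic (string-only) write, for now
}
```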
To be used for non-GNDS products, e.g. NDI3. When ProjectDir is given, namespaces and various other constructs are handled differently. Made miscellaneous small improvements in various other files; for example, direct read() and write() in Component, aligning with Node's. So autogenerated classes now have those read() and write() functions, and there's no need for a user to go through Node explicitly.
Won't compile right now, but wanted to get this in.
For [optional] vector child-node fields, there's now a setter that takes an element to push_back; if the optional has no value, it's given one first. Experience showed that something like this would be helpful, so I updated the code generator accordingly.

Also, combined the default and "from fields" constructors in generated classes. This makes the generated classes shorter and, based on our experience with a prototype data format, makes object construction simpler. A generated class might have, for instance, a few metadata, followed by one or more vector or optional-vector child nodes. It's convenient to initialize the object with metadata and then add vector elements individually later; this change makes that process simpler and shorter.

I also implemented another feature that experience with a prototype autogenerated set of classes suggested would be helpful. Before, getters with index or label looked for *metadata* named "index" or "label". Now, if no metadatum called "index" exists in the type of the vector's element (or in any of its alternatives, if it's a variant), then index is interpreted as a regular vector index, as with the "[]" operator. Note that anyone who uses the code generator must be aware that the interpretation of (index) getters therefore depends fundamentally on whether or not the relevant vector element type has an "index" metadatum. This change essentially gives us an additional feature for free, as (index) previously would just fail unless an "index" metadatum were found.

Removed a "GNDS"-centric comment in Component prettyprinted output; the comment probably wasn't particularly informative anyway. Updated test codes to reflect these changes.
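To make those two generator changes concrete, here's a small standalone sketch of the patterns involved. All of the class, field, and function names are invented for illustration and are not actual generated code.

```cpp
// Standalone illustration of the two generator changes described above.
// All names here (Level, Spectrum, add, ...) are hypothetical.
#include <iostream>
#include <optional>
#include <stdexcept>
#include <string>
#include <vector>

struct Spectrum {
   int index;            // this element type happens to have an "index" metadatum
   std::string label;
};

struct Level {
   std::optional<std::vector<Spectrum>> spectra;

   // Setter that takes an element to push_back. If the optional has no
   // value yet, it's given one first.
   Level &add(const Spectrum &s)
   {
      if (!spectra)
         spectra = std::vector<Spectrum>{};
      spectra->push_back(s);
      return *this;
   }

   // (index) getter. Because Spectrum has an "index" metadatum, look up by
   // that value; for an element type *without* such a metadatum, the new
   // fallback is a plain vector index, as with operator[].
   const Spectrum &operator()(const int index) const
   {
      if (spectra)
         for (const Spectrum &s : *spectra)
            if (s.index == index)
               return s;
      throw std::out_of_range("no spectrum with the given index");
   }
};

int main()
{
   Level level;                     // construct (with metadata, if any) first...
   level.add({10, "ground"});       // ...then add vector elements individually
   level.add({20, "excited"});
   std::cout << level(20).label << '\n';   // prints "excited"
}
```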
Regenerated the prototype codes. The changes to these also reflect other work I'd done recently on the code generator, as they hadn't been regenerated for a while.
That is to say, classes that have no metadata or child nodes, but still have data, e.g. <foo>1.2 3.4 5.6</foo>. Tidied up a CMake-related file. Did some more work on the HDF5 format; it's still not entirely complete. I believe ".hdf"/".HDF" aren't used as HDF5 file extensions; they need a '5'. Made some diagnostic-message terminology more consistent.
Isn't very smart for output yet; it just keeps everything as strings. That will be remedied at a later time. It was tested for functionality on the entire set of GNDS v1.9 files, and all converted:
...original total XML size: 2.9G
...new total HDF5 size: 21G
So, a little over a factor of 7 larger. :-/

Miscellaneous other code changes were put in place during this work. Made a few diagnostic messages, and other minor constructs here and there, more consistent with one another. Added a convert() that proved to be needed for a certain disambiguation; this was noticed while working on some HDF5 input capabilities.
The former was taken from GNDS, but the latter is much more descriptive.
This is the first of a few simplifications I've been planning to do with the code base. Having `Child<>` objects contain a flag, indicating "is a node of this name allowable as a top-level GNDS node," originally seemed like a good idea. With our later decision to autogenerate GNDS Standard Interfaces from GNDS specs, having such a capability in the Core Interface really isn't necessary. The Core Interface can be safely generic, uncluttered with the capability of checking such a thing. Users will prefer our GNDS Standard Interfaces, which, having been generated from GNDS specifications, will be designed so that GNDS hierarchies have the correct structure.

Our Code Generator is also being used to design another (non-GNDS) library, and more such uses may follow. Checking whether a root node has a valid *GNDS* name clearly wouldn't be wanted for other libraries. For this reason, we'd already switched off the check by default. At this point, however, we don't think that even the ability to check such a thing, in the Core Interface, is worth the extra clutter it creates in the code base.
The removed material was for a very old capability that was intended to help users make Meta<> and Child<> objects. I hadn't given it much in the way of capabilities yet, and then it was superseded by the ability to easily manipulate `Meta<>` and `Child<>` objects, e.g. with constructs such as `int{}/Meta<>(...)` to change the type associated with the `Meta<>` object.

The old keyword builder never got much use, didn't have much to offer, and simply isn't needed any longer; better capabilities exist, and in fact have existed for a long time. I just hadn't gotten around to removing the old code. Removing it means a bit less code, one less executable to build in the test suite, and nothing really lost.
Gets rid of the annoying compiler warning about `tmpnam()`. The new formulation required a number of changes in the logic here and there.
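The commit doesn't spell out the new formulation, but for context, one common way to avoid the `tmpnam()` warning on POSIX systems is `mkstemp()`. Here's a sketch of that approach, which may or may not match what was actually done here.

```cpp
// One common tmpnam() alternative (POSIX mkstemp); not necessarily the
// formulation adopted in this commit.
#include <stdexcept>
#include <string>
#include <stdlib.h>    // mkstemp (POSIX)
#include <unistd.h>    // close

std::string makeTemporaryFile()
{
   // mkstemp fills in the trailing XXXXXX and atomically creates the file,
   // avoiding the race condition that makes compilers warn about tmpnam().
   std::string name = "/tmp/gndstk.XXXXXX";
   const int fd = mkstemp(name.data());   // non-const data() requires C++17
   if (fd == -1)
      throw std::runtime_error("mkstemp failed");
   close(fd);
   return name;   // the caller is responsible for removing the file when done
}
```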
Made certain HDF5-related operations more efficient.
A while back, I renamed class BodyText to class BlockData. It turns out that the "body [text]" terminology still appeared here and there. This PR basically completes the terminology change: I renamed relevant constructs and removed "body" terminology, at least as it relates to block data.
That fixed a problem with certain initializations being very time-consuming. Properly split out and clarified "write" versus "print". Removed some old debug messages. Changed some terminology. Trimmed some unused stuff from the KeyTuple (formerly KeywordTup) class.
Fixed (I hope) an error in g++ compilation.
I'll write much more in an upcoming pull request.
An earlier branch introduced the ability to write HDF5 files in different ways, according to two boolean flags: *reduced* and *typed*. This branch enhances our JSON capabilities in the same manner, giving JSON an analog of the options we have for HDF5. This was done in keeping with GNDStk's general philosophy of making its support for different file types as consistent with one another as reasonably possible.

Similarly to HDF5, the JSON class now has two static booleans: "reduced" and "typed".

The "reduced" flag tells the JSON writer whether or not to reduce -- basically, to fold into a shorter and simpler representation -- certain special constructs that GNDStk's Node uses internally to handle what are called cdata, pcdata, and comment nodes in XML. If reduced == true, we make this simplification, which shortens the JSON output. If reduced == false, the JSON closely reflects what Node uses internally.

The "typed" flag tells the JSON writer whether or not to use our type-guessing code to determine whether "strings" in certain contexts (in particular, metadata values and pcdata) actually contain what look like numbers. Examples: "12" looks like an int, "321 476" looks like a vector of ints, "3.14" looks like a double, and "3.14 2.72 1.41" looks like a vector of doubles. If typed == false, we use the original strings, with no type guessing. If typed == true, we apply the type guesser and, where appropriate, get JSON numbers in place of strings in the JSON output.

This is a work in progress, with a couple of things that still need doing. Still to do: modify the JSON *reading* code so that it recognizes the various ways the JSON *writing* code writes things, and can reliably reverse the writing process to recover a GNDStk Node. Also still to do: at present, the nlohmann JSON library, which we use under the hood, doesn't provide a way to write JSON numbers (unquoted, as opposed to quoted JSON strings) starting from an existing string representation that we might provide for the number. Consider this discussion that I started: nlohmann/json#3460. This is relevant to us because GNDStk provides very fine control over exactly how numbers, in particular floating-point numbers, are formatted, in terms of the number of significant digits, fixed vs. scientific form, etc. At the time of this writing, if we use typed == true, so that certain strings that look like numbers are written as numbers, those numbers will unfortunately be formatted by the JSON library itself. We'd like the capability of writing an original string (one that we've already determined looks like a number!) as a JSON number (without quotes) rather than as a JSON string.

In this commit, we also modified an old Tree/ test that depended on the previous default JSON writing behavior (still available via the appropriate flags, even though no longer the default), and changed a variable name in json2class.cpp. We refactored detail-node2hdf5.hpp (for HDF5) considerably, and then reflected this in the modified detail-node2json.hpp (for JSON) that we're primarily working on. And we enhanced the JSON test code so that it tries out the new flags. This, too, is a work in progress; to finish it, we need to complete the ability to read JSON files that were written not just in our original manner, but with any combination of the flags described above.
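For a concrete sense of what the guesser does when typed == true, here's a small self-contained sketch of that kind of type guessing. It mirrors the examples above, but is only an illustration, not GNDStk's actual type-guessing code.

```cpp
// Standalone illustration of the kind of type guessing described above;
// not GNDStk's actual type guesser.
#include <cstddef>
#include <iostream>
#include <sstream>
#include <string>

enum class Guess { Int, Ints, Double, Doubles, String };

Guess guess(const std::string &str)
{
   std::istringstream in(str);
   std::string token;
   std::size_t count = 0;
   bool allInts = true;

   while (in >> token) {
      ++count;
      std::size_t pos = 0;
      try {
         std::stod(token, &pos);   // does the whole token parse as a number?
      } catch (...) {
         return Guess::String;
      }
      if (pos != token.size())
         return Guess::String;
      if (token.find_first_of(".eE") != std::string::npos)
         allInts = false;          // fractional or exponent part ==> not an int
   }

   if (count == 0) return Guess::String;
   if (count == 1) return allInts ? Guess::Int  : Guess::Double;
   return                 allInts ? Guess::Ints : Guess::Doubles;
}

int main()
{
   // The examples from the text above:
   std::cout << (guess("12")             == Guess::Int)      // int
             << (guess("321 476")        == Guess::Ints)     // vector of ints
             << (guess("3.14")           == Guess::Double)   // double
             << (guess("3.14 2.72 1.41") == Guess::Doubles)  // vector of doubles
             << '\n';                                        // prints "1111"
}
```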
Also some small tweaks and comment changes here and there.
Small improvements.
…into feature/better-json
Some tweaks to HDF5-related code. Extended and improved some comments, to clarify things.
Additional tests for those, too.
There are still some instances of the strings in question, typically appearing amongst other text inside of string literals. At some point I'll look into fixing up those cases too, but that's not a priority right now.
Updated the HDF5 test suite to ensure that we test the modifications.