Datapages, Inc.

Reservoir Characterization And The Computing Environment – What You Need To Know To Simplify Your Life

Eric Hatleberg, Jay Hollingsworth
Schlumberger Information Solutions, Houston, Texas

 

Carbonate reservoir characterization is a distinctive endeavor because the goal is a shared model that is familiar and useful across discipline boundaries. A model is constructed from data provided by geologists, geophysicists, petrophysicists, core analysts and engineers, working more or less in collaboration. Each perspective can bring different and divergent terminology, results and goals that confound a meaningful synthesis. On top of that, the software tools we use to describe and analyze reservoirs seem to control our work patterns more than we control them. A better understanding of how interpretation systems work and mature will help return us to a situation where we control the functionality of our software tools rather than invoking black-box solutions. A series of questions helps establish a set of guidelines for more effective reservoir modeling studies: What do you plan to do? How are you going to convey your results? What data will you exchange? How can you extend application functionality? How can you direct change in the system? As always, the guiding principle is clear communication. Solutions come from a basic understanding of software architecture and integration, structured vs. unstructured data, exchange formats, application extensibility, and establishing standards.

What you plan to do determines your software architecture and integration needs. Simple tasks can be completed simply; as requirements grow more complex, solutions become more complicated. The complexity of interpretation systems is illustrated by issues related to single vs. multiple use. Basic drafting functionality is an important but easily attained single-use feature. For example, a single-user application can draw and convey a useful stratigraphic column. Raising the level of functionality to multiple users involves saving, sharing, attaching supporting information, retrieving and modifying the underlying data among a workgroup or with the general public. This is more difficult. Multiple use leads to concerns about database and application integration, and it raises issues of accessibility and data security. Therefore the work requirements, whether single or multiple user, must be reflected in the overall system architecture.
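The jump from single to multiple use can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and field names are invented for this example, not drawn from any actual interpretation system): a single-user tool only needs to draw a stratigraphic column, while a multi-user workflow must also save, retrieve and modify the underlying data and carry attached supporting information.

```python
from dataclasses import dataclass, field

@dataclass
class StratColumn:
    """A stratigraphic column with attached supporting information."""
    name: str
    units: list          # ordered unit names, top to bottom
    attachments: dict = field(default_factory=dict)  # notes, references, etc.

class SharedStore:
    """Minimal multi-user store: save, retrieve, and modify by name.

    A real system would add access control and concurrency handling —
    the accessibility and data-security concerns raised above.
    """
    def __init__(self):
        self._columns = {}

    def save(self, col):
        self._columns[col.name] = col

    def retrieve(self, name):
        return self._columns[name]

    def modify(self, name, **changes):
        col = self._columns[name]
        for key, value in changes.items():
            setattr(col, key, value)

store = SharedStore()
store.save(StratColumn("Well-A", ["Shale", "Limestone", "Dolomite"]))
store.modify("Well-A", attachments={"note": "core-calibrated tops"})
print(store.retrieve("Well-A").attachments["note"])  # core-calibrated tops
```

Even this toy store shows why multiple use is harder: once data is shared, every save and modify operation must be coordinated, which is exactly what pushes a workgroup toward database and application integration.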

How you are going to convey your results determines your reliance on structured (database) vs. unstructured (text) data in your final report. Traditionally, study results have been documented in hardcopy reports. Now, digital copies of documents add the ability to search text and browse for keywords. This opens unstructured data to new uses. Ultimately an author can choose to reveal an increasing amount of the underlying data in the form of structured information that is available for re-use. Graphs are examples of data that can be made conveniently accessible in widely available spreadsheet applications. More complex data types include digital core photos, well logs, or horizon grids. They require specific applications for loading, display or translation, but they provide powerful opportunities for collaboration and incorporation of results in future work.
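The contrast between the two forms of data can be made concrete. In this sketch (the data values and column names are invented for illustration), the same porosity measurements exist both as a sentence of report text, searchable but not re-usable, and as a CSV export that any spreadsheet application can open:

```python
import csv
import io

# Hypothetical porosity-vs-depth data behind a figure in a report.
porosity_log = [(1500.0, 0.12), (1510.0, 0.18), (1520.0, 0.09)]

# Unstructured: the numbers live only inside a sentence of report text.
report_text = "Porosity ranges from 9% to 18% over the 1500-1520 m interval."

# Structured: the same data exported as CSV, re-usable in any spreadsheet.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["depth_m", "porosity_frac"])
writer.writerows(porosity_log)
print(buf.getvalue())
```

A keyword search finds the sentence; only the structured export lets a collaborator re-plot, re-grid, or combine the values with new data.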

What data you will exchange determines your need for specific exchange formats. Exchange formats allow a file-based transfer of a limited set of data. Because an exchange file is free of the constraints of the host and target systems, it is concise and portable. However, this concentration of data generally means some information is lost in the transfer; the exchange format becomes the lowest common denominator for all saved data. Exchange formats are not created equal, and many are used inconsistently. The secondary effect is that lost data must be added back or cleaned up either during or after the load process.

Reference values determine your ability to verify data and extend application functionality. Software systems are designed with varying degrees of elasticity. Some are rigid and do not allow modification, while others are flexible and provide for tailored use and the potential for evolving functionality. One mechanism open to user control for increasing functionality is the loading of standard data values for reference. This is the mechanism that provides for the description of many rock properties. An example is the description of porosity. Loaded values for porosity classification are often limited to fabric-selective types such as intergranular or mouldic. Declaring non-fabric-selective porosities requires expanding the set of values for specific properties and perhaps defining the source of the classification. Controlling the set of reference values used in a system is a way to enforce consistency within that system. This mechanism may also be applied to enhancing the capabilities of a system by adding function variables, parameters and algorithms.
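The reference-value mechanism can be sketched as a controlled vocabulary with validation. This is a minimal illustration, not any vendor's actual implementation; the initial entries follow the familiar fabric-selective porosity types named above, and the extension step adds non-fabric-selective types:

```python
# Loaded reference values: porosity type -> classification group.
# Initially limited to fabric-selective types, as delivered.
porosity_types = {
    "intergranular": "fabric-selective",
    "intragranular": "fabric-selective",
    "mouldic": "fabric-selective",
}

def classify(value):
    """Verify a value against the controlled reference set.

    Rejecting unknown values is what enforces consistency
    within the system.
    """
    if value not in porosity_types:
        raise ValueError(f"'{value}' is not a loaded reference value")
    return porosity_types[value]

# User-controlled extension: declare non-fabric-selective porosities
# by expanding the set of values for the property.
porosity_types["vuggy"] = "non-fabric-selective"
porosity_types["fracture"] = "non-fabric-selective"

print(classify("vuggy"))  # non-fabric-selective
```

Before the extension, `classify("vuggy")` would raise an error; after it, the same validation logic accepts the new value — the system's functionality grew without any change to its code, only to its loaded reference data.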

Establishing standards at local, organizational and industry levels simplifies complex reservoir characterization problems. Standards span a continuum from consistent work habits by an individual, to an agreed set of rules and best practices within a workgroup, up to an established and controlled set of formal methods such as the Code of Stratigraphic Nomenclature. In this way standards are layered, ranging from internal to external. Awareness of the standards that we use in our software systems is the first step to understanding them. For many people that will be sufficient. Accepted standards support clear communication about agreed and recognized scientific concepts. Individuals can contribute to efforts to realize these concepts in their software systems directly with their vendors, by participating in customer relation boards, or by working through professional organizations such as AAPG or SPWLA, or in combination with other industry groups.