Technical Implementation of CERA
Hannes Thiemann, Max-Planck-Institut für Meteorologie, Modelle und Daten (zmaw.de)
Jena, 24 January 2007
Contents
- Task and motivation
- Implementation: databases
- Connection to the HSM
- Outlook
The climate system
Climate model: grid
Climate model: resolution. T42 (300 km), T106 (120 km)
Data volumes
Horizontal resolution of the climate model:
- T42: 128 * 64 = 8192 points per global field
- T106: 160 * 320 = 51200 points per global field
Required storage units (GRIB format):
- Horizontal field (access unit): 17.1 kB (T42) / kB (T106)
- Unix file size for monthly accumulated results with a 6-hour storage interval and 300 2-D variables (physical unit): 616 MB (T42) / 3500 MB (T106)
- 240 years of model integration (logical unit): 1.7 TB (T42) / 10 TB (T106)
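As a sanity check, the slide's figures can be reproduced with a short calculation. This is a sketch: the grid dimensions, the 616 MB T42 monthly file size, and the 1 TB = 10^6 MB convention are taken from the slide; the T106 monthly estimate is derived purely by scaling the point count, so it slightly overshoots the quoted 3500 MB (GRIB packing is not exactly proportional).

```python
# Back-of-the-envelope check of the storage figures on this slide.
def grid_points(nlon, nlat):
    """Points per global horizontal field."""
    return nlon * nlat

t42 = grid_points(128, 64)    # 8192 points, as on the slide
t106 = grid_points(160, 320)  # 51200 points

# Scale the T42 monthly file size by the point-count ratio to estimate T106.
t42_month_mb = 616                            # from the slide
t106_month_mb = t42_month_mb * t106 / t42     # ~3850 MB; slide quotes 3500 MB

# 240 years of monthly files, in TB (using 1 TB = 1e6 MB, as the slide does).
t42_240y_tb = t42_month_mb * 12 * 240 / 1e6   # ~1.77 TB; slide quotes 1.7 TB
```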
Implementation: databases
"The Winter TopTen Program identifies the world's largest and most heavily used databases. … Congratulations on achieving Grand Prize award winner status (1) in Database Size, Other, All, and TopTen Winner status in Database Size, Other, Linux and Workload, Other, Linux in Winter Corp.'s 2005 TopTen Program!"
(1) Grand prizes are awarded for first-place winners in the All Environments categories only.
WDCC's CERA DB has been identified as the largest Linux DB.
Wintercorp (2005), DB size: scientific, archive, and other

Company      Size (TB)  DBMS         Platform             System Vendor
Max-Planck   222        Oracle       Federated/SMP        NEC
USGS/EROS    17         Oracle       Centralized/SMP      Sun
USGS/EROS    17         Oracle       Centralized/SMP      Sun
HP           1          NonStop SQL  Centralized/MPP      HP
T-Systems    1          Oracle RAC   Centralized/Cluster  Sun

See:
Wintercorp (2005), DB size: data warehouse

Company      Size (TB)  DBMS        Platform             System Vendor
Yahoo        100        Oracle      Centralized/SMP      Fujitsu Siemens
AT&T 1)      94         Daytona     Federated/SMP        HP
KT IT-Group  50         DB2         Centralized/Cluster  IBM
LGR          25         Oracle      Centralized/SMP      HP
Amazon       25         Oracle RAC  Centralized/Cluster  HP

See:
1) 330 GB normalized data volume
CERA: some facts
- Oracle 9.2 single instance running on TX7 (Enterprise Edition, Partitioning Option, Advanced Security)
- 24 TB of disk attached to the database nodes
- Database size ~260 TB (logical)
- Database nodes connected to the HSM system
- Data accessible on the internet; 800 named users worldwide
- Daily access: 300 GB/day (average)
- New data: 250 GB/day (average)
[Architecture diagram, © NEC Corporation: climate model on the SX-6; GFS environment with GFS servers and clients; post-processing system; AsAmA 16-way and 4-way database nodes running Oracle DB, with DiskXtender components (DXDB, DXSM, DXDM, DXSN) providing the disk cache, migration, and staging; GE network; Oracle Application Server; users.]
1. The climate model writes raw output (GFS I/O).
2. Post-processing reads the raw data (GFS I/O) and writes data (local I/O).
3. An OCI application reads the data (local I/O).
4. The OCI application writes BLOBs to the Oracle instance (via the network).
5. Data inquiry via the Oracle Application Server (OCI).
WDCC data topology
- Level 1 interface: metadata entries (XML, ASCII) plus data files
- Level 2 interface: separate files containing BLOB table data in an application-adapted structure (time series of single variables)
- Experiment description with pointers to Unix files; dataset descriptions 1..n, each with its own BLOB data table
A BLOB DB table corresponds to a scalable, virtual file at the operating-system level.
Databases: separation of metadata and data (Enterprise User Security, OID)
Tables (800 GB): Entry, Reference, Status, Distribution, Contact, Coverage, Parameter, Spatial Reference, Local Adm., Data Access, Data Org
Data matrix of a model experiment: model variables versus model run time.
- 2-D: small BLOBs (16 kB)
- 3-D: large BLOBs (3 MB)
- Raw data file: direct model output (0.7 – 16.2 GB), kept in the DKRZ archive
Each column is one BLOB table and one META table in the CERA DB.
Structure of metadata tables
Metadata table: blob_id, blob_size, start_date, blob_min, blob_max, blob_mean
This information makes it possible to:
- answer simple queries without accessing the data itself,
- check consistency against the data itself,
- perform quality controls.
The metadata tables reside on disk. The metadata allow mapping a blob_id to the actual model time.
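The blob_id-to-model-time mapping mentioned above can be sketched as follows. This is an illustration only, assuming one BLOB per stored time step and the 6-hour storage interval quoted earlier; the real CERA schema is not shown on the slide.

```python
from datetime import datetime, timedelta

def blob_model_time(start_date, blob_id, interval_hours=6):
    """Map a 1-based blob_id back to model time, assuming one BLOB per
    stored time step at a fixed storage interval (sketch, not the real
    CERA schema)."""
    return start_date + timedelta(hours=(blob_id - 1) * interval_hours)

# With a 6-hour interval, blob_id 5 lands one model day after the start.
blob_model_time(datetime(1900, 1, 1), 5)  # -> 1900-01-02 00:00
```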
Structure of BLOB tables
BLOB data table: blob_id, blob_data
Range partitioning:
- Table partition 1: blob_id 1..n, time t_0..t_n, datafile 1
- Table partition 2: blob_id n+1..m, time t_n+1..t_m, datafile 2
- Table partition n: blob_id m+1..k, time t_m+1..t_k, datafile n
- …
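With range partitioning on blob_id, locating a row's partition (and hence its datafile) is a simple bound search. A minimal sketch of that lookup, assuming Oracle-style exclusive upper bounds (VALUES LESS THAN semantics):

```python
import bisect

def partition_index(blob_id, upper_bounds):
    """Return the index of the range partition holding blob_id, where
    upper_bounds[i] is partition i's exclusive upper bound (Oracle's
    VALUES LESS THAN). Sketch only; Oracle does this lookup internally."""
    i = bisect.bisect_right(upper_bounds, blob_id)
    if i == len(upper_bounds):
        raise ValueError(f"blob_id {blob_id} above all partition bounds")
    return i
```

For example, with bounds [10, 20, 30], blob_id 9 falls into partition 0 and blob_id 10 into partition 1, matching Oracle's "less than" rule.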
Implementation: connection to the HSM
[Diagram: read-write tablespaces (TBS) holding table partitions 1 and 2; a read-only tablespace holding table partition 1 on the dxdb filesystem; migout and migin between them.]
All tablespaces are moved at once to dxdb.
Migout / migin
- Migout takes place after files haven't been modified for x minutes. Only one migout process runs per dxdb filesystem.
- Migin takes place immediately after a file is requested. Only the parts accessed are retrieved from the backend storage. One migin process runs per requested file.
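The migout selection rule can be sketched as below. This is an illustration only: the slide does not give the actual value of "x minutes", so it appears here as a parameter, and the real DiskXtender policy engine works on its own internal state, not a Python dict.

```python
import time

def migout_candidates(file_mtimes, idle_minutes, now=None):
    """Return files eligible for migout: those not modified for at least
    idle_minutes. file_mtimes maps path -> last-modified epoch seconds.
    (Sketch of the rule on the slide, not DiskXtender's interface.)"""
    now = time.time() if now is None else now
    cutoff = now - idle_minutes * 60
    return sorted(path for path, mtime in file_mtimes.items() if mtime <= cutoff)
```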
Purging
[Diagram: dxdb filesystem fill level between a low watermark (LWM) and a high watermark (HWM).]
Criteria for purging
- The size of the datafiles doesn't matter, except that small datafiles can stay on disk.
- Time since last modification (easy for read-only tablespaces).
- Time since last touch: Oracle tends to touch datafiles quite often; the Oracle parameter read_only_open_delayed could be an option.
- Prerequisite: two copies on tape.
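Put together, the criteria above amount to a watermark-driven purge loop like the following sketch. Names and thresholds are illustrative, not DiskXtender's actual interface, and the two tape copies are assumed to already exist.

```python
def purge_plan(files, used, capacity, hwm=0.90, lwm=0.70, small_limit=1 << 20):
    """files: list of (path, size_bytes, last_touched) for datafiles that
    already have two copies on tape. If usage is above the high watermark,
    purge least-recently-touched files (small ones may stay on disk) until
    usage drops below the low watermark. Watermark values are illustrative."""
    if used <= hwm * capacity:
        return []
    purged = []
    for path, size, _touched in sorted(files, key=lambda f: f[2]):
        if size <= small_limit:          # small datafiles can stay on disk
            continue
        purged.append(path)
        used -= size
        if used <= lwm * capacity:
            break
    return purged
```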
Inside the datafile
[Diagram: 128 kB header, primary key, LOB index, table, BLOB data.]
Frontend versus backend
On the frontend filesystem only the 128 kB header remains; in the HSM backend the datafile is stored in parts (Part 1 = 512 MB, Part 2 = 512 MB, …).
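Since only the parts actually accessed are staged back (migin), a read must be mapped onto the 512 MB backend parts. A sketch of that mapping, using the 512 MB part size and the 128 kB resident header from the slides; whether the header byte range counts toward part 0 is an assumption here.

```python
PART_SIZE = 512 * 2**20    # 512 MB backend parts, as on the slide
HEADER = 128 * 2**10       # 128 kB header that stays on the frontend disk

def parts_to_stage(offset, length, part_size=PART_SIZE, header=HEADER):
    """Return the backend part numbers a read of [offset, offset+length)
    must stage from tape. Bytes within the resident header need no migin.
    (Illustrative layout, not DiskXtender's actual bookkeeping.)"""
    end = offset + length
    if length <= 0 or end <= header:
        return []                        # served entirely from the header
    first = max(offset, header) // part_size
    last = (end - 1) // part_size
    return list(range(first, last + 1))
```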
Retrieving data
[Diagram, step 4: an access beyond the 128 kB header triggers a tape request for the affected part.]
Usage: Downloads
Statistics: Size
Outlook: global model T213 (atmosphere)
Horizontal resolution of the climate model:
- T213: 640 * 320 = 204800 points per global field
- T106: 160 * 320 = 51200 points per global field
Required storage units (GRIB format):
- Horizontal field (access unit): kB (T213) / kB (T106)
- Unix file size for monthly accumulated results with a 6-hour storage interval and 300 2-D variables (physical unit): 14000 MB (T213) / 3500 MB (T106)
- 240 years of model integration (logical unit): 40 TB (T213) / 10 TB (T106)
Outlook: regional model, resolution and data volumes
REMO-UBA model domain (orography)
Resolution: 10 x 10 km
Data volume: 5 TB / 100 years (surface fields only)
Thank you!