
Presentation on the topic: "OVERVIEW OF EMC DATA DOMAIN" — presentation transcript:

Note to Presenter: Present to customers and prospects to provide them with an overview of EMC Data Domain.

2 EMC Data Domain: Market Leadership and Innovation
A tradition of innovation, 2003 to 2012. As you can see here, Data Domain systems have a history of leadership and innovation in the deduplication storage category, starting with the first deduplicated NAS storage system and spanning to 2012, when EMC introduced the first inline deduplication storage system for compliant archiving. Milestones: first long-term retention system for backup and archiving; first deduplication NAS; first virtual tape library with deduplication; largest deduplication array; fastest backup controller; first volume replication with deduplication; first directory replication with deduplication; cascaded replication; first deduplication nearline storage; first distributed processing; first inline deduplication for compliant archiving.

3 Deduplication significantly reduces storage capacity requirements.
10- to 30-times reduction in stored data compared to full or incremental backups with typical retention policies. [Chart: data stored over weeks in use, deduplication storage versus conventional storage.] Backup can be an inefficient process that involves repetitively moving mostly the same data again and again. Deduplication dramatically reduces the amount of redundancy in backup storage and is defined as "the process of finding and eliminating duplication within sets of data." The deduplication process uses well-understood concepts such as cryptographic hashes and content-addressed storage. Only unique segments are stored, along with the metadata needed to reconstitute the original dataset. This chart gives you an indication of why nine out of 10 respondents to TheInfoPro Wave 15 Storage Study already have, or have plans for, deduplicated backup, and shows one angle on how to look at its impact. There are two points to note here: First, the effect grows over time. The more redundant data that is stored, the greater the gap between the amount stored by the backup software (the light blue area) and the amount of capacity actually used (the dark blue area on the bottom). Second, these numbers are based on a typical backup policy of a weekly full backup. The amount of data reduction varies primarily with that policy and how long the data is kept, so the retention policy guides the degree of deduplication more than any other factor. One thing is clear: the impact is significant. Note to Presenter: Details of the May 2011 release of TheInfoPro Wave 15 Storage Study can be found at this URL: f1000-enterprises-2011-storage-spend-continues-at-a-strong-pace/.
TheInfoPro's "Technology Heat Index" is widely regarded as an effective measure of user demand for a technology and, from a vendor's perspective, a good indicator of the relative size of the market opportunity.
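The hash-based deduplication described above (cryptographic hashes, content-addressed storage, unique segments plus metadata) can be sketched in a few lines of Python. This is an illustrative toy model, not Data Domain's implementation: real systems use variable-length segments of a few kilobytes, and the fixed 8-byte segment size and function names here are assumptions for the example.

```python
import hashlib

SEGMENT_SIZE = 8  # illustrative; real systems use variable-length KB-scale segments

def deduplicate(data: bytes, store: dict) -> list:
    """Split data into segments, keep only unseen ones, return the recipe."""
    recipe = []
    for i in range(0, len(data), SEGMENT_SIZE):
        segment = data[i:i + SEGMENT_SIZE]
        digest = hashlib.sha256(segment).hexdigest()  # content address
        if digest not in store:       # store only unique segments
            store[digest] = segment
        recipe.append(digest)         # metadata needed to rebuild the dataset
    return recipe

def reconstitute(recipe: list, store: dict) -> bytes:
    """Rebuild the original byte stream from the recipe and segment store."""
    return b"".join(store[digest] for digest in recipe)

store = {}
backup = b"AAAAAAAABBBBBBBBAAAAAAAACCCCCCCC"  # one repeated 8-byte segment
recipe = deduplicate(backup, store)
assert reconstitute(recipe, store) == backup
print(len(backup), sum(len(s) for s in store.values()))  # -> 32 24
```

The redundant segment is stored once, so 32 logical bytes need only 24 bytes of storage; with real backup streams the redundancy, and therefore the reduction, is far larger.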

4 Backup Data Reduction/Deduplication
Adoption timeline in large enterprises. Over the last three years, use of backup with deduplication has risen from 15 percent to 48 percent. Note to Presenter: View in Slide Show mode for the hyperlink in the footer to work. According to the latest TheInfoPro Wave Storage Study, 48 percent of Fortune 1000 respondents have backup deduplication in use, and another 40 percent have it either in pilot or in their future plans. That is about nine in 10 respondents either with deduplication or moving to it, giving deduplication a "Technology Heat Index" rank of 1. In other words, the move is on from tape-centric backup architectures to disk-centric backup designs based on deduplication technologies. Note to Presenter: Details of the May 2011 release of TheInfoPro Wave 15 Storage Study can be found at this URL: f1000-enterprises-2011-storage-spend-continues-at-a-strong-pace/. TheInfoPro's "Technology Heat Index" is widely regarded as an effective measure of user demand for a technology and, from a vendor's perspective, a good indicator of the relative size of the market opportunity. Source: Wave 15 Storage Study, Q2 2011, published , large-enterprise sample; 1H 2007, n=151; 2H 2008, n=127; 1H 2009, n=147; 2H 2009, n=182; 1H 2010, n=146; 1H 2011, n=31; TheInfoPro (www.theinfopro.com)

5 Backup Data Reduction/Deduplication
Large enterprises. EMC has three times as many installations as the next competitor. Note to Presenter: View in Slide Show mode for the hyperlink in the footer to work. The previous slide showed a chart from TheInfoPro Wave 15 Storage Study on deduplication adoption and plans. This chart, from the same TheInfoPro study, shows EMC's significant leadership over competitors in the area of backup data reduction/deduplication. Note to Presenter: Details of the May 2011 release of TheInfoPro Wave 15 Storage Study can be found at this URL: f1000-enterprises-2011-storage-spend-continues-at-a-strong-pace/. Source: Wave 15 Storage Study, Q2 2011, published , large-enterprise sample, n=31; TheInfoPro (www.theinfopro.com)

6 PBBA (Purpose-Built Backup Appliances)
Open systems and mainframe. Worldwide vendor revenue for the first half of 2011, total share of the PBBA market. This was expected to be a $2.7B market in 2011 that hasn't really been tracked until now. EMC is in a clear leadership position with over 62 percent market share. Of all the backup market segments, that is a great place to be, since it is growing the fastest and will be over a $5B market by . This means the market is embracing these solutions. EMC 62 percent; 1H 2011 total market $1.2B. Source: Worldwide Purpose-Built Backup Appliances 2011–2015 Forecast Update: Explosive Growth in 2011, December 2011, IDC, document no.

7 Data Domain Deduplication Storage Systems Enable…
Longer retention: keep backups onsite longer using less disk for fast, reliable restores, with no tape needed for operational recovery. Smarter replication: transferring only deduplicated data over existing networks enables up to 99 percent bandwidth efficiency and cost-effective disaster recovery. Reliable recovery: continuous fault detection and automatic error correction help ensure that data is recoverable and that service-level agreements (SLAs) are met. Note to Presenter: View in Slide Show mode for animation. Let's look at what kind of transformational advantages you'll get from Data Domain. You'll be able to: Retain backups longer. By reducing data amounts by 10 to 30 times, you can keep backups onsite longer using less disk for fast, reliable restores, and eliminate the use of tape for operational recovery. Replicate smarter. Move only deduplicated data over existing networks for up to 99 percent bandwidth efficiency and cost-effective disaster recovery. Recover reliably from disk. With continuous fault detection and system self-healing, you can ensure that data is recoverable and easily meet service-level agreements.

8 Deduplication Fundamentals
The next section will focus on deduplication fundamentals.

9 Data Domain Fundamentals
Easy integration into an existing environment. Control tier: backup and archiving applications (EMC, Symantec, CommVault, IBM, HP, Veeam, Quest). Target tier and disaster recovery tier: CIFS, NFS, NDMP, and DD Boost over Ethernet; Virtual Tape Library (VTL) over Fibre Channel. Now I'll introduce you to the Data Domain storage system and move from the outside in. This is a picture of what you would see in a Data Domain deployment. A Data Domain appliance is a storage system with shelves of disks and a controller. It is optimized first for backup and second for archive applications, and supports most of the industry-leading backup and archiving applications. I'll talk primarily about backup in this discussion and get to archiving later in the presentation. The list on the left is composed primarily of leading backup applications: not only EMC's offerings with EMC NetWorker, but also Symantec, CommVault, and so on, even niche vendors like Veeam for VMware. On the way into the storage system, data can pass through either Ethernet or Fibre Channel. With Ethernet, it can use standard protocols such as NFS or CIFS; it can also use optimized protocols or products, such as Data Domain Boost, a custom integration with leading backup applications. After the data is stored, having been deduplicated during the storage process, it can be replicated for disaster recovery. Only the compressed, deduplicated unique data segments that have been filtered out during the write process on the target tier are replicated. Replication: DD890 appliance to DD890 appliance.

10 Data Deduplication: Technology Overview
Store more backups in a smaller footprint.

Backup data                    Logical    Est. reduction   Physical
Friday full backup             1 TB       2–4x             250 GB
Monday incremental backup      50 GB      7–10x            5 GB
Tuesday incremental backup     50 GB      7–10x            5 GB
Wednesday incremental backup   50 GB      7–10x            5 GB
Thursday incremental backup    50 GB      7–10x            5 GB
Second Friday full backup      1 TB       50–60x           18 GB
TOTAL                          2.2 TB     7.6x             288 GB

A technology overview of data deduplication will help illustrate how you can store more backups in a smaller footprint with Data Domain. Note to Presenter: Click now in Slide Show mode for animation. On Friday, the backup application initiates the first full backup of 1 TB, but only 250 GB is stored on Data Domain. This occurs because, as the data stream is coming into Data Domain, the system deduplicates before storing data to disk. On average this results in a two- to four-times reduction on a first full backup. Over the course of the week, the 50 GB daily incremental backups see a seven- to 10-times reduction and require only 5 GB each to be stored. As the graphic on the left shows, the week's incremental backups contain data that was already protected by the first full backup. Finally, on the second Friday, the second full backup contains almost all redundant data, so of that 1 TB backup dataset, only 18 GB needed to be stored. In total, over the course of a week, 2.2 TB of data was backed up to Data Domain, but the system required only 288 GB of capacity to protect this dataset. Overall, this resulted in a 7.6-times reduction in one week.
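The one-week arithmetic above can be checked directly. This sketch simply recomputes the totals from the table's per-backup figures; the sizes are the slide's own numbers, not measured data.

```python
# Verify the one-week totals from the table above (sizes in GB).
backups = [
    ("Friday full",       1000, 250),
    ("Mon incremental",     50,   5),
    ("Tue incremental",     50,   5),
    ("Wed incremental",     50,   5),
    ("Thu incremental",     50,   5),
    ("2nd Friday full",   1000,  18),
]

logical  = sum(l for _, l, _ in backups)   # what the backup app wrote
physical = sum(p for _, _, p in backups)   # what actually landed on disk

print(f"{logical/1000:.1f} TB logical, {physical} GB physical, "
      f"{logical/physical:.1f}x reduction")
# -> 2.2 TB logical, 288 GB physical, 7.6x reduction
```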

11 Retention: Keep More Data Longer, with Less Effort
More than a year of retention in 3U of Data Domain deduplication storage.

Backup data           Cumulative logical   Est. reduction   Physical
First full backup     1 TB                 4x               250 GB
Week 1 (April 7)      2.2 TB               8x               288 GB
Week 2 (April 14)     3.4 TB               10x              326 GB
Week 3 (April 21)     4.6 TB               13x              364 GB
Month 1 (April 28)    5.8 TB               14x              402 GB
Month 2 (May 31)      10.6 TB              19x              554 GB
Month 3 (June 30)     15.4 TB              21x              706 GB
TOTAL                 15.4 TB              21x              706 GB

Note to Presenter: View in Slide Show mode for animation. If you extend this scenario out to four months of backups, you'll see how you could retain more backups longer with less disk by eliminating redundant data from your backup stream, reducing the necessary amount of backup storage. By doing this, you'll be able to change the economics of using disk, eliminating or minimizing the use of tape for operational recovery. This chart shows the dramatic reduction in storage required for backups. Just as on the previous slide, the first column is the type of backup data: the first full backup, then the backups accumulated after week one, week two, and onward under a four-month retention policy. The cumulative logical column shows how much data has been protected and would be stored without deduplication. The third column gives the estimated reduction from deduplication, and the last column the actual physical storage used with Data Domain. As you can see, at the end of three months you've protected the equivalent of 15.4 TB of backups but used only 706 GB of disk, a 21-times reduction. Viewed differently, the three-month deduplicated total is 50 percent less than the single-week total using non-deduplicated storage. This dramatic impact shows why so many companies have redesigned their backup around disk-optimized storage.
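The physical column in the table above follows a simple week-over-week pattern: after the first week's 288 GB, each further retained week adds four deduplicated incrementals (5 GB each) plus one deduplicated full (18 GB), about 38 GB. This sketch reproduces the column under that assumption; the model is inferred from the table, not taken from any Data Domain sizing tool.

```python
# Reproduce the physical-capacity column of the retention table above,
# assuming each retained week adds 4 incrementals (5 GB) + 1 full (18 GB).
WEEKLY_ADD = 4 * 5 + 18          # 38 GB of new unique data per week

def physical_gb(weeks: int) -> int:
    """Deduplicated capacity after `weeks` weeks; week 1 stores 288 GB."""
    return 288 + (weeks - 1) * WEEKLY_ADD

for label, weeks in [("week 1", 1), ("week 2", 2), ("week 3", 3),
                     ("month 1", 4), ("month 2", 8), ("month 3", 12)]:
    print(label, physical_gb(weeks), "GB")
```

Running it yields 288, 326, 364, 402, 554, and 706 GB, matching the table: retention grows the footprint only by the small unique fraction of each new backup.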

12 Data Integrity: Data Invulnerability Architecture
End-to-end data verification: generate a checksum; deduplicate and write to disk (deduplication, local compression, RAID, file system); then verify the data: check file system metadata integrity, user data integrity, and stripe integrity, and re-checksum and compare. Self-healing file system: cleaning of expired data, defragmentation, verification. Another important differentiator for Data Domain systems is the Data Invulnerability Architecture. The Data Domain Data Invulnerability Architecture lays out the industry's best defense against data integrity issues by providing unprecedented levels of data protection, data verification, and self-healing capabilities that are unavailable in conventional disk or tape systems. There are three key areas of data integrity protection described on this slide: First is end-to-end data verification at backup time. As illustrated by the graphic at the right, end-to-end verification means reading data after it is written and comparing it to what was sent to disk, proving that it is reachable through the file system to disk and that the data is not corrupted. Specifically, when the Data Domain Operating System receives a write request from backup software, it computes a checksum over the data. After analyzing the data for redundancy, it stores the new data segments and all of the checksums. After all the data has been written to disk, the Data Domain Operating System verifies that it can read the entire file from the disk platter and through the Data Domain file system, and that the checksums of the data read back match the checksums of the written data. This confirms the data is correct and recoverable from every level of the system. If there are problems anywhere along the way, for example if a bit has flipped on a disk drive, it will be caught.
Since most restores happen within a day or two of backups, systems that verify and correct data integrity slowly over time will be too late for most recoveries. Second is a self-healing file system. Data Domain systems actively re-verify the integrity of all data every week in an ongoing background process. This scrub process finds and repairs defects on the disk before they can become a problem. In addition, real-time error detection ensures that all data returned to the user during a restore is correct. On every read from disk, the system first verifies that the block read from disk is the block expected. It then uses the checksum to verify the integrity of the data. If any issue is found, the Data Domain Operating System will self-heal and correct the data error. In addition to data verification and self-healing, there is a collection of other capabilities: Data Domain with RAID 6 provides double-disk-failure protection, NVRAM enables fast, safe restart, and snapshots provide point-in-time file system recoverability. Backups are the data store of last resort. The Data Domain Data Invulnerability Architecture provides extra levels of data integrity protection to detect faults and repair them, ensuring backup data and recovery are not at risk. Other: RAID 6, NVRAM, snapshots.
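The write-path verification described above, checksum on receipt, write to disk, read back through the file system, re-checksum and compare, can be sketched as follows. This is an illustrative model using an ordinary file and SHA-256, not the Data Domain Operating System's actual mechanism; the function name is hypothetical.

```python
import hashlib
import os
import tempfile

def verified_write(path: str, data: bytes) -> None:
    """Write data, then read it back and compare checksums end to end."""
    checksum = hashlib.sha256(data).hexdigest()   # computed on receipt
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())                      # force the write to media
    with open(path, "rb") as f:                   # read back through the FS
        readback = f.read()
    if hashlib.sha256(readback).hexdigest() != checksum:
        # e.g. a flipped bit on the drive would be caught here
        raise IOError(f"end-to-end verification failed for {path}")

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
verified_write(path, b"backup segment payload")
os.remove(path)
```

The key idea is that verification happens at write time, while the original data is still in hand, rather than during a later scrub when the restore may already be needed.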

13 Network-Efficient Replication for True Disaster Recovery
Lower WAN costs; improve service-level agreements. Flexible replication: one-to-many, many-to-one, bidirectional, system-to-system, cascaded. Once the data is stored in a Data Domain system, there are a variety of replication options to move the compressed, deduplicated changes to a secondary or tertiary site for restore in multiple locations for disaster recovery. This can be done in a number of ways. There is a very high-performance, whole-system, volume-replication approach. The most popular, however, is a directory- or tape-pool-oriented approach that lets you select a part of the file system, or a virtual tape library or tape pool, and replicate only that. So a single system could be used as both a backup target and a replica for another Data Domain system. This graphic shows a number of smaller sites all replicating into one hub site. In those cases, the source systems ask the hub whether or not it already has a given segment of data. If it doesn't, the source sends the data; if the destination system already has it, the source site doesn't have to send it again. In this scenario with multiple systems replicating to one, in a many-to-one configuration, there is cross-site deduplication, further reducing the WAN bandwidth required and the price. [Diagram: remote sites (backup data, data archiving) send only 1 to 5 percent of their data over the WAN or private links to a Data Domain DD890 at a data center hub; hundreds of remote sites supported; cross-site bandwidth reduction of 95 to 99 percent.]
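The hub dialogue described above, "do you already have this segment?", can be sketched as a simple protocol. The class and function names and the in-memory hub index are illustrative assumptions, not Data Domain's replication wire protocol.

```python
import hashlib

class Hub:
    """Stand-in for the hub-site system's index of stored segments."""
    def __init__(self):
        self.segments = {}
    def has(self, digest: str) -> bool:
        return digest in self.segments
    def put(self, digest: str, segment: bytes) -> None:
        self.segments[digest] = segment

def replicate(segments: list, hub: Hub) -> int:
    """Send only segments the hub lacks; return bytes actually sent."""
    sent = 0
    for segment in segments:
        digest = hashlib.sha256(segment).hexdigest()
        if not hub.has(digest):        # ask first, send only if missing
            hub.put(digest, segment)
            sent += len(segment)
    return sent

hub = Hub()
site_a = [b"common-os-image", b"site-a-data"]
site_b = [b"common-os-image", b"site-b-data"]   # shares a segment with site A
sent_a = replicate(site_a, hub)                 # hub is empty: sends both
sent_b = replicate(site_b, hub)                 # cross-site dedup: sends one
print(sent_a, sent_b)  # -> 26 11
```

Because site B's shared segment is already at the hub, only its unique data crosses the WAN, which is the cross-site deduplication effect the slide describes.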

14 DD Boost Software
Distributes parts of the deduplication process to backup servers or application clients. Up to 50 percent faster backups. Enables more efficient resource utilization. Provides application control of Data Domain replication processes. Supports most native backup utilities available for industry-leading databases: EMC Avamar and NetWorker; Symantec NetBackup and Backup Exec; EMC Greenplum and Oracle RMAN; Quest vRanger. In the traditional backup world, backup software is backup software, and storage is storage. DD Boost software distributes part of the deduplication process out of the Data Domain system and onto the backup server or application clients. This makes the backup network more efficient, makes Data Domain systems 50 percent faster, and makes the whole aggregate system more manageable. It works across the entire Data Domain product line, supports the majority of the backup market, and now also supports native utilities in industry-leading databases. New!

15 Additional Data Domain Software Options
Data Domain Retention Lock: secure retention of archive data from files and email; meets internal compliance and governance requirements. Data Domain Replicator: network-efficient and encrypted; consolidates up to 270 remote sites into a single system. In addition to DD Boost, EMC offers five additional Data Domain software options that can enhance the value of a Data Domain system in your environment. Note to Presenter: Click now in Slide Show mode for animation. The first is DD Retention Lock software, which enables you to easily implement deduplication with file locking to satisfy IT governance and compliance standards, including SEC 17a-4(f), for archive data. Next is DD Replicator software, which provides fast, network-efficient, encrypted replication for disaster recovery, remote office data protection, multi-site tape consolidation, and long-term offsite retention. DD Replicator asynchronously transfers only the compressed, deduplicated data over the WAN, making network-based replication cost-effective, fast, and reliable. In addition, you can replicate up to 270 remote sites into a single Data Domain system for consolidated protection of your distributed enterprise. Next is DD Virtual Tape Library software, which eliminates tape-related failures by enabling all Data Domain systems to emulate multiple tape devices over a Fibre Channel interface. This software option provides easy integration of deduplication storage in open systems and IBM i environments. Next is DD Extended Retention software, which enables long-term retention of backup data on the DD860 or DD990 with up to 65 PB of logical capacity. Finally, DD Encryption software protects backup and archive data stored on Data Domain systems with encryption that is performed inline, before the data is written to disk. Encrypting data at rest satisfies internal governance rules and compliance regulations and protects against theft or loss of a physical system.
The combination of inline encryption and deduplication provides the most secure data-at-rest encryption solution available. Data Domain Virtual Tape Library: easy Fibre Channel integration; supports open systems and IBM i operating environments. Data Domain Extended Retention: long-term retention of backup data; up to 65 PB of logical capacity. Data Domain Encryption: inline data-at-rest encryption; protects against theft or loss of a physical system.

16 The Industry's Most Scalable Inline Deduplication Systems
Data Domain software options: DD Boost, DD Encryption, DD Extended Retention, DD Replicator, DD Retention Lock, DD Virtual Tape Library. Product tiers: enterprise, midsize business, small business/ROBO. Here's a look at the latest Data Domain product family, including the new DD990. The capabilities previously available in a DD Archiver are now available only with the DD Extended Retention software option, on two platforms; as you can see, the capacities listed for the DD860 and DD990 now include a line dedicated to DD Extended Retention.

Model   Speed (DD Boost)   Speed (other)   Logical capacity            Usable capacity
DD160   1.1 TB/hr          667 GB/hr       40–195 TB                   up to 3.98 TB
DD620   2.4 TB/hr          —               83–415 TB                   up to 8.3 TB
DD640   3.4 TB/hr          2.3 TB/hr       0.32–1.6 PB                 up to 32.2 TB
DD670   5.4 TB/hr          3.6 TB/hr       0.6–2.7 PB                  up to 55.9 TB
DD860   9.8 TB/hr          5.1 TB/hr       1.4–7.1 PB (5.7–28.5 PB¹)   up to 142 TB (570 TB¹)
DD890   14.7 TB/hr         8.1 TB/hr       2.9–14.2 PB                 up to 285 TB
DD990   31.0 TB/hr         15.0 TB/hr      5.7–28.5 PB (13–65 PB¹)     up to 570 TB (1.3 PB¹)
¹ With the DD Extended Retention software option

17 Deduplication Storage Evaluation Criteria
The next section will focus on deduplication storage evaluation criteria.

18 Methodology: Inline Versus Post-Process Deduplication
INLINE: deduplication before storing. No impact on other activities; predictable; simpler. POST-PROCESS: deduplication after storing; deduplicate, then store, with triple disk access to shared storage. The more processes, the more resource contention: tape copy (too slow to stream to tape), recovery (service-level-agreement predictability), replication (poor disaster recovery times), deduplication (when it overlaps with backup or restore), and more administrative effort to manage these problems. One of the most common alternatives to the Data Domain inline deduplication storage approach (shown on the left) is a methodology known as post-process (shown on the right). In the post-process architecture, data is stored to disk before deduplication. After it is stored, it is read back internally, deduplicated, and written again to a different area. Although this approach may sound appealing, because it seems as if it would allow for faster backups and use fewer resources, it actually creates two problems: First, a lot more disk is needed, both to store the multiple pools of data and for speed, because most other vendors' deduplication approaches are spindle-bound. Because of this, there are typically three or four times more disks in a post-process configuration than you'll see in a Data Domain deployment. Second, it's simply much easier to use an inline approach. If all data is filtered before it's stored to disk, the system behaves like a regular storage system: it just writes data and reads data. There's no separate administration involved in managing multiple pools, some deduplicated, some regular storage, and the transitions between them. Less administration in the storage system is always better.

19 Performance: CPU-Centric Versus Spindle-Bound
[Chart: throughput (MB/s, up to 6,000) versus number of disk spindles (50 to 200); Data Domain versus most deduplication vendors using SATA or Fibre Channel drives. Improvement since 2004: throughput roughly 200x, capacity roughly 450x.] This slide shows another way to look at the virtues of being CPU-centric. As mentioned before, most of the deduplication competitors for backup targets are spindle-bound, or disk-bound. It takes so many disk seeks to look up whether data has been stored before, and to sort out and then minimize the data, that a lot of disk drives, or faster disk drives, are needed to get the job done. This slide shows what has happened in our competitive environment as a result. If they're using SATA disk drives, most deduplication storage vendors tend to need three or four times as many drives as a Data Domain system to store the same amount of deduplicated data. In some cases, for example IBM's ProtecTIER, storage systems use Fibre Channel drives instead of SATA. This can decrease the seek time, but it comes at a significantly higher cost. Data Domain systems, by being CPU-centric and minimizing disk usage to only what is required to store the actual data, end up having a smaller footprint. This can look like a weakness, but it's actually a strength. By keeping costs down, the Data Domain system is much better positioned to compete, for example, with a tape library on cost per gigabyte.

20 Why Data Domain?
Fewer disks and less management overhead: CPU-centric deduplication; inline deduplication. Simple, mature, and flexible: a simple, mature appliance; any fabric, software, backup, or archiving application. Resilience and disaster recovery: storage of last resort; fast time to disaster recovery (DR) readiness; cross-site global compression; data center or remote site. Why Data Domain? To summarize, it starts with economics. There's less disk to buy and less to manage. The CPU-centric deduplication approach of the SISL Scaling Architecture makes the system simpler to manage, easier to provision, and more energy-efficient. In addition, Data Domain is more mature and flexible than most of its competitors. Data Domain has been sold longer, and the problems that most of EMC's competitors are just starting to discover have already been fixed. It works as advertised, and that alone is highly differentiated in this particular category. Finally, because of their resilience and replication flexibility, Data Domain systems not only work as advertised but work reliably.

21 Thank you.

22 Data Domain Infrastructure and Ecosystem
Support for a wide variety of workloads and data types: backup, archiving, midrange and mainframe, VMware, Microsoft, Microsoft SharePoint, Oracle, SAP, NAS, SAN, DAS, IBM i, primary storage, EMC DLm1000. OPTIONAL SLIDE. This diagram illustrates the Data Domain infrastructure and ecosystem, including (Note to Presenter: Click now in Slide Show mode for animation.) traditional backup applications as well as (Note to Presenter: Click now in Slide Show mode for animation.) archiving applications, both sending data to a single Data Domain system for consolidated protection. Note to Presenter: Click now in Slide Show mode for animation. In addition, Data Domain systems offer simultaneous support for open systems and mainframe environments with EMC DLm1000. Data Domain systems easily integrate with existing infrastructures and can be used seamlessly with a variety of data movers and application workloads. By consolidating to a common disk-based target, you can avoid creating disparate islands of data and storage. A single Data Domain system can be used for backup and recovery, protection of enterprise applications (Oracle, Microsoft Exchange, VMware, and others), archiving, and online reference storage. Using a new DD VTL software feature, Data Domain systems can now provide the fastest backup throughput available for IBM i environments, up to 8.1 TB/hr. Data Domain systems support simultaneous use of other data access protocols to provide consolidated protection of IBM i and open systems environments. Additionally, DD Replicator can be used in all environments to provide network-efficient replication for offsite disaster recovery. Backup applications: EMC, Symantec, CommVault, CA, HP, Vizioncore, IBM, Atempo, BakBone. Archiving applications: EMC, F5 Networks, Symantec, CommVault. Disaster recovery: network replication over the WAN.

23 Enterprise Readiness for Recovery at the Disaster Recovery Site
Data Domain inline replication with deduplication: DR-ready, replicating during the backup. "Adaptive" post-process replication with deduplication: backup to cache; backup times 1.7 times longer than Data Domain; DR readiness at under 50 percent of ingest speed while deduplicating and replicating, twice as long with uncompressed data and fixed bandwidth. OPTIONAL SLIDE. A consequence of a post-process approach, rather than Data Domain's inline approach, is a delay in completing replication. In a Data Domain inline deduplication process, the data is deduplicated as it is stored; once it hits disk and there's a logical consistency point, replication can start, so the data can be "DR-ready" at a remote site very quickly, reasonably shortly after the backup is complete. There are two different ways that vendors use post-process styles of storage to do replication, but both end up slower when it comes to the restore site. In an adaptive process, data is stored on disk, and after a small amount of data is collected it starts deduplicating. After deduplication finishes, it can start replicating that first image; meanwhile, it's still backing up other data. The consequence is that the functions overlap, so a single controller is busy managing the I/O-intensive process of backup and the typically disk-bound process of deduplication, followed by the additional work of replication. In a scheduled post-process approach, all of the backup data is stored first, then all of it is deduplicated and replicated. This goes a little faster than when the stages overlap, but it still takes longer. In some implementations the data is also not compressed when it replicates, which takes additional time over the same bandwidth.
In all of these non-inline approaches it's simply going to take longer, and if your recovery point objective on the disaster recovery side is to restore data as soon as possible after backup, you will always do better with an inline approach, especially a Data Domain approach. The worst case (at the bottom of the slide) is backing up to a disk storage system like a virtual tape library, copying to tape, tracking the tapes, and then recalling them. "Scheduled" post-process replication with deduplication: backup to cache; backup times 1.1 times longer than Data Domain; DR readiness at under 50 percent of ingest speed while deduplicating and replicating, twice as long with uncompressed data and fixed bandwidth. VTL/tape/transport: backup to VTL, copy to tape, transport to the vault, transport from the vault, recall tapes.

24 EMC Global Services: Strategy Development, Design, Implementation
Manage. CONSULTING, TECHNOLOGY DEPLOYMENT, MANAGED SERVICES, MAINTENANCE AND SUPPORT, TRAINING. The Strategic Observation service establishes a roadmap and vision for achieving your recovery objectives. The Operational Readiness service recommends a reference architecture that uses EMC deduplication technologies and optimizes your implementation. Best-practice methods from architecture through integration: assessment, design/implementation, operational assurance, health checks, data migration. Residency Services provide experienced service experts, onsite or at remote locations, with proven best practices and technological know-how. Remote Managed Services offer cost-effective, ITIL-based, intelligent remote monitoring and operational infrastructure management around the clock. Comprehensive global, proactive, and preventive procedures and the corresponding solution support. Open Storage Technology training, EMC technology-specific learning paths, EMC Proven Professional certification. OPTIONAL SLIDE. EMC Global Services offers end-to-end service capabilities for physical and virtualized environments. Its portfolio includes: Consulting Services, to help you get more value from your information and more business impact from your information infrastructure investments. Technology Deployment Services, to help you plan, design, implement, and optimize your EMC information infrastructure. Managed Services, to fill knowledge, expertise, and resource gaps. Maintenance and Support Services, to ensure business continuity and a highly available data environment. Training and Certification, to help you develop the skills to implement and manage complex IT infrastructures and growing storage demand.

25 Why EMC Global Services
Lower costs: significantly lower implementation and operating expenses; fill internal resource gaps for less; protect investments in EMC solutions. Faster time to value: reduced deployment time; faster ROI for new projects; simplified compliance while protecting critical business data. Reduced risk and better results: a solution configured to your requirements; improved service levels and lower management costs; EMC best practices and unmatched product expertise, adding up to a superior customer experience; fewer disruptions while taking advantage of the features and benefits of the latest EMC products and solutions. OPTIONAL SLIDE. EMC Global Services is a large component of your total EMC experience. EMC Global Services allows you to: Save money by significantly lowering your implementation and operating expenditures, filling internal resource gaps for less, and protecting your investments in EMC solutions. Accelerate time to value by reducing deployment time, accelerating return on investment for new projects, and easing the burden of compliance while protecting critical business information. Mitigate risk and get better results by configuring the solution to meet your requirements, improving your service levels and reducing your management costs, using EMC best practices and unmatched product expertise for a superior customer experience, and reducing disruption while taking advantage of the features and benefits of the latest EMC products and solutions.

