How to improve the state of cloud data protection

Today, most data protection solutions use public cloud platforms to reduce the cost of on-premises data protection infrastructure. To save money, vendors usually store backup datasets in low-cost object storage such as AWS Simple Storage Service (S3). These vendors typically store the protected datasets in a proprietary format, which reduces accessibility and reusability. To improve the state of cloud data protection, providers need to focus on providing immediate access for workload recovery and on making the data reusable for other use cases. (Source: D1Net)

How the cloud is used in data protection

Many vendors use the cloud only to store exact copies of backup datasets, which effectively makes the public cloud a tape alternative but does not shrink the on-premises storage infrastructure. Others use public cloud storage as a tier and migrate older backups off-premises, which does reduce the local backup storage footprint. Some vendors try to use public cloud compute, not just cloud storage, to create disaster-recovery-as-a-service (DRaaS) offerings, but find that recovering in the cloud takes almost as long as recovering to a customer-owned site.

Backup format problem

The biggest challenge is that most data protection vendors do not store data in its native application format. To improve local backup performance, they package the data into larger chunks before writing it to disk. These proprietary formats persist when vendors move data to the cloud. Storing data in the cloud in a non-native format means the data must be extracted before cloud services or disaster recovery can use it, which increases the recovery time objective (RTO).
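To make the problem concrete, here is a minimal sketch of a "proprietary" backup container: individual files are packed into one opaque blob, so no cloud service can read a single file until the whole blob is unpacked. The length-prefixed format here is purely illustrative, not any vendor's actual layout.

```python
# Illustrative sketch: files packed into one opaque container blob.
# Nothing can mount or query an individual file until the mandatory
# extraction step runs, and that step is what inflates the RTO.
import struct

def pack(files):
    """Pack {name: bytes} into a single proprietary-style blob."""
    blob = b""
    for name, data in files.items():
        n = name.encode()
        blob += struct.pack(">II", len(n), len(data)) + n + data
    return blob

def unpack(blob):
    """The extraction step required before any file is usable again."""
    files, i = {}, 0
    while i < len(blob):
        nlen, dlen = struct.unpack_from(">II", blob, i)
        i += 8
        name = blob[i:i + nlen].decode()
        i += nlen
        files[name] = blob[i:i + dlen]
        i += dlen
    return files

backup = pack({"db.sqlite": b"rows...", "config.ini": b"key=value"})
# A cloud service cannot read "db.sqlite" out of `backup` directly;
# it must first pay for the full extraction pass.
restored = unpack(backup)
```

Data kept in native format skips the `unpack` step entirely, which is the accessibility advantage the article is describing.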

Object storage problem

Object storage is a very cost-effective way to store data. Its built-in scalability and durability make it an ideal choice for long-term data retention. However, object storage is generally not suitable as storage for production applications. If a vendor stores data in S3 buckets, its customers must copy or restore the data to another tier of the cloud infrastructure before actually using it. For example, moving datasets from S3 to Amazon Elastic Block Store (EBS) can take more than an hour per terabyte. Add the time to extract data from a proprietary format, and the time to restore data to EBS grows substantially. In one reported case, an AWS customer said it took more than 24 hours to recover a 6 TB database.
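The arithmetic behind those figures can be sketched directly. The hydration rate of roughly one hour per terabyte comes from the text; the extraction rate below is an assumption chosen to show how the 24-hour figure becomes plausible.

```python
# Back-of-the-envelope restore-time model for the figures in the text:
# S3 -> EBS hydration at ~1 hour per terabyte, plus optional time to
# extract data from a proprietary format (that rate is an assumption).

def restore_hours(dataset_tb, hydrate_hr_per_tb=1.0, extract_hr_per_tb=0.0):
    """Estimated hours to make a dataset usable on block storage."""
    return dataset_tb * (hydrate_hr_per_tb + extract_hr_per_tb)

# A 6 TB database: hydration alone is ~6 hours. With a slow
# (assumed 3 hr/TB) proprietary-format extraction step on top, the
# total reaches the 24-hour mark the AWS customer reported.
hydration_only = restore_hours(6)                         # 6.0
with_extraction = restore_hours(6, extract_hr_per_tb=3.0)  # 24.0
```

The model is linear, not exponential; the point is simply that each extra stage in the restore pipeline multiplies the terabyte count by another per-terabyte cost.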

The failback problem

In most cases, once customers successfully recover in the cloud, they will want to return operations to the original data center. The problem is that while an organization is running in a disaster recovery state, it is changing and creating data, and all of that changed and new data must be transferred back to the primary data center. Even if the local data center already holds most of the data, most data protection applications still restore the entire dataset. The cloud makes this worse because of slow transfer speeds and egress costs.
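A quick cost sketch shows why retransmitting the entire dataset hurts. The per-gigabyte egress price and the changed-data volume below are assumptions for illustration, not quoted cloud rates.

```python
# Illustrative egress-cost arithmetic for failback. The flat per-GB
# price and the 200 GB changed-data figure are assumptions.

EGRESS_USD_PER_GB = 0.09  # assumed flat egress rate for illustration

def egress_cost(gb_transferred):
    """Dollars charged to move this much data out of the cloud."""
    return round(gb_transferred * EGRESS_USD_PER_GB, 2)

# Failing back an entire 6 TB (6144 GB) dataset vs. sending only the
# 200 GB that actually changed while running in the cloud:
full_cost = egress_cost(6144)   # 552.96
delta_cost = egress_cost(200)   # 18.0
```

The ratio, not the exact dollar amounts, is the point: a full-dataset failback pays egress on data the local site already has.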

Actifio 10C for advanced cloud data protection

Actifio's model differs from traditional data protection solutions. First, it stores data in native application format, making it almost instantly accessible to any process or service. Second, it lets organizations choose how much to invest in their local infrastructure: they can keep full copies, working sets, or no local backup storage at all. Actifio can instantly mount a virtual machine's data store from local or cloud object storage, then stream the data from any location back to the local storage facility. Because Actifio restores the data in the background, the virtual machine is accessible immediately.

The company has added reverse change block tracking in its latest release, Actifio 10C, so that only the data needed for recovery is restored. If any local backup cache survives a disaster, that data is not retransmitted. This streaming capability eliminates the failback problem, and reverse change block tracking significantly reduces recovery time and cloud egress costs.
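The mechanism as described can be sketched as a block-level comparison: each block of the recovery image is checked against whatever survives in the local cache, and only missing or stale blocks are streamed. The block layout and hashing scheme below are assumptions for illustration, not Actifio's actual implementation.

```python
# Minimal sketch of reverse change block tracking as described above:
# compare recovery-image blocks against the surviving local cache and
# retransmit only blocks that are missing or stale.
import hashlib

def block_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def blocks_to_stream(recovery_image, surviving_cache):
    """Both arguments map block_id -> bytes; return ids to retransmit."""
    return [
        bid for bid, data in recovery_image.items()
        if bid not in surviving_cache
        or block_hash(surviving_cache[bid]) != block_hash(data)
    ]

image = {0: b"boot", 1: b"app-v2", 2: b"logs"}
cache = {0: b"boot", 1: b"app-v1"}  # block 1 is stale, block 2 missing
needed = blocks_to_stream(image, cache)  # [1, 2] -- block 0 is skipped
```

Everything the cache already holds (block 0 here) never crosses the wire, which is exactly the recovery-time and egress-cost saving the paragraph claims.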

Actifio 10C also supports multiple backup targets. Customers can back up to local object storage or NAS and to cloud platforms at the same time. New in Actifio 10C is support for Dell EMC Data Domain storage systems through the DD Boost protocol. Customers can also replicate data to multiple public clouds simultaneously, for disaster preparedness or to build out cloud platforms for different use cases. Again, because the data is stored in native format, these services can access it directly. Today, Actifio supports Amazon AWS, Google Cloud Platform, and IBM Cloud with one-click disaster recovery orchestration. Because Actifio stores data in native format, cloud-native services such as Amazon Redshift or Google BigQuery can use it for analysis and processing.
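The multi-target idea amounts to fanning one snapshot out to several destinations concurrently. This is a generic sketch of that fan-out pattern, not Actifio's API; the target names are placeholders, and a real writer per target (object storage, NAS, Data Domain, public clouds) would replace the in-memory stand-in.

```python
# Sketch of fanning one backup snapshot out to multiple targets at
# once. Target names are placeholders; the dict write stands in for a
# real upload to each destination.
from concurrent.futures import ThreadPoolExecutor

def replicate(snapshot: bytes, targets):
    """Write the same snapshot to every target concurrently."""
    stores = {}

    def write(name):
        stores[name] = snapshot  # stand-in for a real upload call
        return name

    with ThreadPoolExecutor() as pool:
        done = list(pool.map(write, targets))
    return done, stores

done, stores = replicate(
    b"snap-001", ["local-nas", "data-domain", "aws", "gcp"]
)
```

Running the uploads in parallel is what makes "at the same time" cheap: total wall-clock time is bounded by the slowest target rather than the sum of all of them.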

Actifio 10C also addresses the problem of moving data from cloud object storage into cloud block storage. It does this by placing an SSD cache between object storage and block-based storage. With this capability, Actifio can deliver high-performance data access on the primary cloud platform immediately, without waiting for all data to migrate to block storage. This makes large-scale testing and analytics on the cloud platform more cost-effective.
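The SSD-cache idea is a read-through cache: the first access to a block goes to object storage, and every later access is served from the fast tier, so workloads get usable performance before any bulk migration. Here is a minimal sketch with both tiers simulated in memory; it illustrates the access pattern, not Actifio's implementation.

```python
# Minimal read-through cache sketch: first read of each block pays the
# object-storage trip; repeat reads hit the fast SSD tier. Both tiers
# are simulated as dicts for illustration.

class CachedObjectStore:
    def __init__(self, object_store):
        self.object_store = object_store  # slow, cheap tier (e.g. S3)
        self.ssd_cache = {}               # fast tier, filled on demand
        self.cold_reads = 0               # trips made to object storage

    def read(self, key):
        if key not in self.ssd_cache:
            self.cold_reads += 1
            self.ssd_cache[key] = self.object_store[key]
        return self.ssd_cache[key]

store = CachedObjectStore({"block-0": b"aaaa", "block-1": b"bbbb"})
store.read("block-0")
store.read("block-0")   # served from cache, no object-storage trip
store.read("block-1")
# store.cold_reads == 2: each block touched object storage only once
```

Because only the working set ever lands on the expensive fast tier, this is cheaper than hydrating the full dataset to block storage up front.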

An important new feature in Actifio 10C is disaster recovery orchestration, which enables Actifio customers to create and automate disaster recovery plans. They can pre-provision the network, set the recovery order, and execute pre-recovery and post-recovery scripts. The result is a simple one-click recovery to a local or cloud platform. Disaster recovery orchestration encourages IT to invest time in disaster planning, and it makes plans easier to update and test. The Actifio orchestrator will also automatically instantiate additional Sky appliances to ensure that large-scale recovery jobs run quickly.
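Such a plan is essentially ordered tiers of machines with optional hooks around each tier. The sketch below shows that shape; the plan contents, hook names, and executor are invented for illustration and do not reflect Actifio's plan format.

```python
# Sketch of a one-click recovery plan in the spirit of the feature
# described above: ordered tiers, each with optional pre- and
# post-recovery hooks. Plan contents and hook names are illustrative.

def run_plan(plan, recover):
    """Execute tiers in order; return a log of actions performed."""
    log = []
    for tier in plan:
        for hook in tier.get("pre", []):
            log.append(f"pre:{hook}")       # e.g. provision networking
        for vm in tier["vms"]:
            recover(vm)                      # bring this machine back
            log.append(f"recover:{vm}")
        for hook in tier.get("post", []):
            log.append(f"post:{hook}")      # e.g. verification scripts
    return log

plan = [
    {"vms": ["dns-01", "ad-01"], "pre": ["provision-network"]},
    {"vms": ["db-01"], "post": ["verify-db"]},
    {"vms": ["app-01", "web-01"]},
]
log = run_plan(plan, recover=lambda vm: None)
```

Encoding the order and hooks as data is what makes the plan easy to update and to test: a rehearsal is just `run_plan` pointed at a sandbox instead of production.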

Organizations can also use disaster recovery orchestration for cloud migration. The feature allows continuous seeding of a sandbox during testing, followed by a final cutover when ready. It can also inject these workloads into containers rather than virtual machines, further helping organizations modernize their data center operations.

After the migration is complete, customers can continue to use Actifio to protect the cloud-native version of the workload. All the same features apply, including the ability to replicate data to another cloud platform. They can use an agentless approach that takes advantage of cloud snapshots, or they can use Actifio's native solution, which creates more consistent copies of the data and enables faster recovery.

Actifio 10C is a major upgrade. Its capabilities enable organizations to recover quickly anywhere. It also helps customers reduce costs by raising the effective performance of object storage so it can serve many use cases. This release's reverse change block tracking lets companies shorten local recovery time while cutting egress costs. Its disaster recovery orchestration enables IT professionals to keep pace with changes in the data center; disaster recovery planning is becoming a lost art, and orchestration allows IT to rediscover it. Finally, Actifio 10C helps with digital transformation: customers can not only migrate their workloads to a cloud platform but also protect them once they are there.
