Big data analysis can help identify new ways to address certain network conditions

If there is one behavior to look for, it is when the measurement points indicate a network state that fails to generate a problem report, even though that state faithfully reproduces a known problem period. Here, the goal is to use the metrics to identify what is likely to have mitigated the expected problem; that identification can speed up your root-cause analysis or suggest other ways to remedy the offending condition.

 

Another component to consider is how resources are affected by a network, application, or server event, or by a change in user traffic load. When a major change occurs in one of these areas, the network should respond in a predictable manner. For example, a significant increase in application traffic generally produces a visible increase in response time and, among other problems, a higher rate of dropped packets.

 

When these behaviors occur without a major change in traffic, they suggest that resources are overloaded. Conversely, when a significant change in traffic occurs without an equivalent increase in response time or packet loss, they may reveal that the network is oversized. In that case, some reduction in capacity can be tolerated, opening the way to a lower operating budget.
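As an illustration, here is a minimal Python sketch of that comparison. The metric names and the 20 % thresholds are assumptions made for the example, not values recommended by this article: it flags a likely overload when response time or packet loss degrades without a matching traffic change, and a possibly oversized network when a large traffic change is absorbed without degradation.

def classify_capacity(traffic_change_pct, response_change_pct, loss_change_pct):
    """Rough capacity assessment from relative changes in three metrics."""
    degraded = response_change_pct > 20 or loss_change_pct > 20
    traffic_shift = abs(traffic_change_pct) > 20

    if degraded and not traffic_shift:
        return "likely overloaded: degradation without a matching traffic change"
    if traffic_shift and not degraded:
        return "possibly oversized: traffic change absorbed without degradation"
    return "behavior consistent with current sizing"

# Example: traffic up 35 %, response time and packet loss roughly flat.
print(classify_capacity(35, 3, 1))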

 

Focus only on actionable conditions

 

One last tip: do not go hunting for problems in the data. Some administrators will dig through big data reports for unusual patterns of behavior, even when there is no evidence that these behaviors can be associated with any of the processes or tasks outlined above. You may simply discover that users demand more from their applications and from the network at certain times of the day. (Generally an easy thing to confirm: just take a look around the office!)

 

The key to putting big data analysis to work in network applications is to focus on the issues, not the data points. Information about the state of the network – whether homogeneous and normal, or aberrant and problematic – is only relevant once it has been classified. Unclassifiable states are difficult to turn into action items, so it is hard to justify the analysis time required to detect and manage them. Administering a network is not easy, and to prove itself a viable tool, big data must make that work easier, not harder. To stay on course, focus on actionable data.
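To make the point concrete, here is an illustrative Python sketch; the condition classes and symptom names are invented for the example. Only states that map to a known class become action items, and everything else is reported as unclassified rather than absorbing analysis time.

KNOWN_CONDITIONS = {
    ("high_response_time", "stable_traffic"): "investigate overloaded resources",
    ("stable_response_time", "traffic_spike"): "review over-provisioned capacity",
    ("packet_loss", "interface_errors"): "check the physical link",
}

def to_action_item(symptoms):
    """Map observed symptoms to an action item, or flag the state as unclassified."""
    return KNOWN_CONDITIONS.get(tuple(symptoms),
                                "unclassified: not worth analysis time yet")

print(to_action_item(["high_response_time", "stable_traffic"]))
print(to_action_item(["odd_lunchtime_pattern"]))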

 

 

Manual migration versus a scripted approach

 

The majority of data migration projects are managed manually, usually with Excel spreadsheets. This approach requires processes and, above all, discipline: a data migration spreadsheet must be continuously updated by several administrators.
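As a hedged illustration of that discipline, the following Python sketch audits a hypothetical migration tracker exported to CSV. The file name, the column names (lun, owner, status, last_updated) and the staleness threshold are assumptions for the example, not a standard.

import csv
from datetime import datetime, timedelta

VALID_STATUS = {"planned", "in progress", "migrated", "verified"}
STALE_AFTER = timedelta(days=7)

def audit_tracker(path):
    """Return a list of tracker rows that break the agreed discipline."""
    problems = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            lun = row.get("lun", "<unknown>")
            if not row.get("owner"):
                problems.append(f"{lun}: no owner assigned")
            if row.get("status") not in VALID_STATUS:
                problems.append(f"{lun}: unexpected status {row.get('status')!r}")
            if datetime.now() - datetime.fromisoformat(row["last_updated"]) > STALE_AFTER:
                problems.append(f"{lun}: not updated since {row['last_updated']}")
    return problems

for issue in audit_tracker("san_migration_tracker.csv"):
    print(issue)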

 

Scripts are a common alternative. Their advantage is that they can be customized for the environment: storage, servers, applications, vendors, and infrastructure. The drawback: these scripts are rarely documented, tested, corrected, or updated, and they usually need to be rewritten for each SAN data migration project. Scripts always seem like a good idea … until they are implemented.
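For illustration only, here is a Python sketch of what a slightly more reusable script could look like: the environment-specific mapping lives in a JSON plan file and a --dry-run flag prints the plan instead of executing it. The copy_lun() function is a placeholder, since the real copy command depends entirely on the arrays and tools in use.

import argparse
import json

def copy_lun(source, target):
    # Placeholder: in a real project this would call the vendor's replication
    # or host-based copy tooling for the arrays involved.
    print(f"copying {source} -> {target}")

def main():
    parser = argparse.ArgumentParser(description="Replay a LUN migration plan.")
    parser.add_argument("plan", help="JSON file mapping source LUNs to target LUNs")
    parser.add_argument("--dry-run", action="store_true", help="print the plan only")
    args = parser.parse_args()

    with open(args.plan) as fh:
        plan = json.load(fh)   # e.g. {"array1:lun07": "array2:lun31", ...}

    for source, target in plan.items():
        if args.dry_run:
            print(f"[dry-run] would copy {source} -> {target}")
        else:
            copy_lun(source, target)

if __name__ == "__main__":
    main()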

 

These data migration technologies save time

 

Four different technologies can be implemented to limit the manual tasks of a data migration. While each has trade-offs, they can all dramatically reduce workloads and errors:

 

Use programmable physical storage appliances or virtual SDS appliances (VSAs).

 

Make sure the new SAN is a newer version or release of the old SAN, and enable data replication between the two.

 

Use a scale-out SAN system.

 

Use VMware vSphere Storage vMotion.

 

SDS appliances are generally considered the simplest way to migrate SAN data. The storage systems are migrated behind the SDS appliance or VSA without touching the servers or the infrastructure in front of the appliance or virtualization layer. The disadvantage: these appliances add significant costs that raise the total cost of ownership. Complexity also increases, because the SDS or VSA appliances, the hypervisor resources for the VSA, and the back-end storage arrays must all be managed.

 

Refreshing the SDS appliance itself adds further complexity, although this is typically handled by the sequential migration of active-active or scale-out appliances. This is not a problem for VSAs.

 

Staying with the same storage vendor and migrating to a newer version of its SAN is a relatively common, but not necessarily perfect, approach. It requires a replication license on both systems, and all servers connected to the old storage must be reconnected to the new one. Even so, it is usually a solid compromise.

 

The use of scale-out SANs has grown in popularity. With these systems, processing power and capacity can be upgraded transparently in the data center, which eliminates a separate data migration: a new node is added, the data is quickly rebuilt across the remaining nodes, and the old node is removed and wiped. The operation is repeated until all the nodes are upgraded. Applications, servers, virtual machines (VMs) and connected containers are not disrupted by this approach.

 

VMware Storage vMotion has become a popular alternative. It migrates the data of each LUN on the old SAN array assigned to a given VMware host to LUNs on the new SAN array. The VMs' virtual LUNs do not change, which masks the physical change of storage arrays. The disadvantage of this approach is that it only works for virtual machines running in a VMware infrastructure. It also provides no SAN or VMware host remediation, it does not move data from LUNs that are not assigned to that particular VMware host, and it offers no testing or validation of the data migration.
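As an illustration of the mechanism, not of any specific product workflow, the following Python sketch uses the pyvmomi SDK to relocate one VM's disks to a datastore backed by the new array. The vCenter address, credentials, VM name and datastore name are placeholders, and error handling is omitted.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vim_type, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    match = next((obj for obj in view.view if obj.name == name), None)
    view.DestroyView()
    return match

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")
    target_ds = find_by_name(content, vim.Datastore, "new-san-datastore-01")

    spec = vim.vm.RelocateSpec()
    spec.datastore = target_ds                  # move only the VM's disks
    WaitForTask(vm.RelocateVM_Task(spec))       # Storage vMotion: VM stays online
finally:
    Disconnect(si)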

 

Of course, all four alternatives involve a commitment to a vendor: the first to the SDS provider, the second to the storage array vendor, the third to the scale-out storage vendor, and the fourth to VMware.

 
