5. Network access control: After the shakeout
6. 10 Gigabit Ethernet: A switch in time
7. Virtualization: Beyond the server farm
8. Cloud computing: Proceed with caution
9. Web 2.0: Learn to live with it
5. Network access control: After the shakeout
Network access control has been a hot, fun topic for the past couple of years.
Epic standards battles pitted Cisco against Microsoft, each with its own terminology and approach. And who could forget the Trusted Computing Group, which, with its own architecture, acted as a wild card?
Then there was the horde of third-party vendors offering to handle a company's NAC needs if it didn't want to wait for Cisco and Microsoft to deliver on their promises.
Last year was a turning point for NAC, however. The standards battles appear to have been resolved, and everything looks like it's falling into place. Customers apparently decided to wait for Microsoft to deliver its NAC products - and that left many third-party vendors out in the cold. A lot of them went under, including Caymas Systems and Lockdown Networks.
And because Network Access Protection (NAP, Microsoft's version of NAC) comes with Vista and Windows Server 2008, deciding to go with Microsoft has become a no-brainer for many customers. NAP represents a clear choice, rather than a technology that requires extensive research, RFPs, product tests and evaluations, and so forth.
NAP even proved itself in a recent product evaluation Forrester Research performed to determine which NAC tools would solve real-world deployment problems. Microsoft came in first, followed by Cisco and Juniper Networks.
This year the questions for customers will be where do we deploy NAC, and how many NAC features do we turn on? Most customers today are using NAC just to control guest access. That's important, but the technology can do more. On the pre-admission side, it can scan user devices, determine whether they are clear of viruses, check to see if patches have been updated and quarantine the device if security conditions aren't met. On the post-admission side, it can make sure that a clean machine remains that way, and that users access only those parts of the network to which they have authorization.
These important functions are ones that every IT exec should be implementing.
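To make the pre- and post-admission ideas concrete, here is a minimal Python sketch of a posture check that quarantines non-compliant devices and restricts where a clean device can go. Every name in it (the Device record, the VLAN numbers, the required patch level) is a hypothetical illustration, not the interface of any actual NAC product.

```python
# Minimal sketch of pre- and post-admission NAC checks.
# All names (Device, VLAN numbers, patch level) are hypothetical illustrations,
# not the API of any real NAC product.
from dataclasses import dataclass

REQUIRED_PATCH_LEVEL = "2008-12"   # assumed policy value
PRODUCTION_VLAN = 10
QUARANTINE_VLAN = 99

@dataclass
class Device:
    mac: str
    antivirus_clean: bool
    patch_level: str
    authorized_segments: set

def admit(device: Device) -> int:
    """Pre-admission: return the VLAN to place the device on."""
    if not device.antivirus_clean:
        return QUARANTINE_VLAN        # fails the virus scan
    if device.patch_level < REQUIRED_PATCH_LEVEL:
        return QUARANTINE_VLAN        # patches out of date
    return PRODUCTION_VLAN            # checks passed, admit normally

def authorize(device: Device, segment: str) -> bool:
    """Post-admission: only reach the network segments the user is entitled to."""
    return segment in device.authorized_segments

laptop = Device("00:11:22:33:44:55", True, "2009-01", {"finance"})
print(admit(laptop))               # 10 -> production VLAN
print(authorize(laptop, "hr"))     # False -> blocked post-admission
```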
6. 10 Gigabit Ethernet: A switch in time
In 2001, when 10 Gigabit Ethernet switches were introduced, the average per-port cost was $39,000, according to IDC.
Today, a 10G Ethernet port costs less than $4,000, which makes 10G Ethernet switches affordable for the enterprise wiring closet or data center.
With ongoing data-center server consolidation, not to mention the needs of service providers and high-volume Web sites, standards groups and vendors are hard at work on 40 Gigabit Ethernet and even 100 Gigabit Ethernet. For now, however, 10G Ethernet is the industry standard, and customers are flocking to 10G Ethernet switches. Switch-based 10G Ethernet port shipments grew by 140% in 2007, Infonetics Research reports. Worldwide revenue for 10G Ethernet services and equipment will hit nearly $9.5 billion by year-end, a 30% increase from last year, the firm predicts.
If your Fast Ethernet boxes are becoming stressed, this might be the time to move to 10G Ethernet. Per-port prices are coming down and feature sets are going up. A recent Network World test of seven 10G Ethernet switches found these products offer not only powerful packet-pushing capabilities but also 802.1X authentication, enhanced multicast support, protection against denial-of-service attacks and IPv6 support. The test demonstrated that these switches have extensive management and security features, which are just as important as how many packets they can move per second.
7. Virtualization: Beyond the server farm
By now, you've most likely implemented some level of x86 server virtualization. So, the question of the moment is this: Does data-center virtualization on x86 boxes represent the end of your virtualization efforts or just the beginning?
What about storage virtualization? What about desktop virtualization? What about application virtualization? What about virtualizing all your data-center hardware including Unix boxes and mainframes?
Those are the key, long-term virtualization questions facing IT executives. Once you've started down the road to decoupling the underlying technology infrastructure from the services you're providing to the business, doesn't it make sense to extend that strategy across the enterprise?
If you're inclined to agree, the next logical step would be storage virtualization, because you're dealing with another technology residing in the data center. The advantages of creating a virtual storage pool include lower-cost data migration, easier storage-resource management, common replication services and the ability to maximize and extend your storage resources.
Client virtualization, which comes in a variety of options, also offers real benefits. In the hosted virtual-desktop setup, applications are hosted on a server and users work on thin-client machines. This would be ideal, for example, in a call center.
In another version of desktop virtualization, one physical machine is virtualized. Here, separate business and personal zones could be created on mobile workers' laptops for security and compliance.
Or multiple operating systems could be run on a single PC. This scenario would apply to engineers, for example, who might be running a specific Unix or Linux-based technical application but using Windows for e-mail and other basic applications.
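As a rough sketch of what "multiple operating systems on a single PC" looks like under the hood, the snippet below lists the guests running on one libvirt-managed KVM machine. It assumes the libvirt Python bindings are installed; the guest mix (a Linux development VM alongside a Windows VM, say) is whatever the user has defined.

```python
# Sketch: enumerate the guest operating systems on one physical machine,
# assuming a local libvirt/KVM hypervisor and the libvirt Python bindings.
import libvirt

conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():    # every defined guest OS
        state, maxmem, mem, vcpus, _ = dom.info()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
        print(f"{dom.name():20s} {running:8s} {vcpus} vCPU, {maxmem // 1024} MB")
finally:
    conn.close()
```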
Most companies today are in the first stage of virtualization, says Gartner analyst George Weiss. This means they're consolidating and virtualizing servers as cost-cutting measures, typically with a single vendor.
The next phase would be using virtualization technology for the dynamic allocation of resources across servers. And the final phase, which won't occur for several more years, is heterogeneous virtualization, the ability to move workloads dynamically across hardware platforms.
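A rough sketch of that "dynamic allocation" phase, assuming two KVM hosts managed by libvirt: a running guest is live-migrated from one physical server to another. The host and guest names are hypothetical, and a real deployment would wrap scheduling and capacity logic around this single call.

```python
# Sketch: live-migrate a running guest between physical hosts, assuming
# KVM hosts managed by libvirt; host and guest names are hypothetical.
import libvirt

src = libvirt.open("qemu+ssh://host-a.example.com/system")
dst = libvirt.open("qemu+ssh://host-b.example.com/system")

dom = src.lookupByName("erp-app-01")                        # hypothetical workload
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)   # move it while it runs

src.close()
dst.close()
```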
8. Cloud computing: Proceed with caution
As we arrive at 2009, cloud computing is the technology creating the most buzz. Cloud technology is in its infancy, however, and enterprises would be wise to limit their efforts to small, targeted projects until the technology matures and vendors address a variety of potentially deal-breaking problems.
First off, let's define cloud computing. Gartner says it is "a style of computing whose massively scalable and elastic, IT-related capabilities are provided 'as a service' to external customers using Internet technologies."
The two most commonly cited examples of cloud offerings come from Amazon.com and Google, both of which basically rent their data-center resources to outside customers.
For example, Amazon's Elastic Compute Cloud (EC2) lets customers rent virtual-machine instances and run their applications on Amazon's hardware. Other services under the EC2 umbrella include storage and databases in the cloud. Amazon uses Xen for virtualization and offers customers a choice of Linux, Solaris or Windows operating systems.
The pitch is that customers can take advantage of Amazon's expertise in running large data centers, that customers pay only for the compute and storage resources they use, and that Amazon can scale up or down easily, depending on the demand.
That's the most basic level of cloud computing - infrastructure in the cloud. In this scenario, the customer is aware of and makes choices concerning the infrastructure itself.
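As a concrete illustration of infrastructure in the cloud, here is a minimal sketch of renting and then releasing a virtual-machine instance, assuming Amazon's boto3 Python SDK and valid credentials; the machine image ID and instance type are placeholders.

```python
# Sketch: rent a virtual-machine instance from EC2, assuming the boto3 SDK
# and valid AWS credentials; the AMI ID and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder machine image (e.g. a Linux AMI)
    InstanceType="t2.micro",     # pay only for what this instance consumes
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)

# Scale down when demand drops: terminate the instance and stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])
```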
The next level is cloud computing as a Web development platform. The best example is Google's App Engine, a place where Web application developers can upload code (as long as it's written in Python) and let Google's infrastructure take care of deploying the application and allocating compute resources.
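For a sense of how little infrastructure the developer touches, here is a minimal hello-world handler in the Python webapp framework App Engine shipped at the time; uploading something like this is essentially all it takes before Google handles deployment and resource allocation.

```python
# Minimal App Engine handler using the Python webapp framework of that era;
# Google deploys the code and allocates compute resources automatically.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers["Content-Type"] = "text/plain"
        self.response.out.write("Hello from the cloud")

application = webapp.WSGIApplication([("/", MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
```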
The third level is running enterprise applications in the cloud. A cloud vendor could host an enterprise application and take responsibility for that application's availability and performance. Gartner predicts e-mail will become one of the first enterprise applications that move to the cloud.
How is that different from software-as-a-service (SaaS)? Without getting too tangled up in semantics, SaaS typically refers to a specific vendor - Salesforce.com, for example - offering its application to multiple customers in a hosted model. Theoretically, a SaaS vendor could use the cloud infrastructure to host its applications. Also theoretically, a cloud provider could host anybody's application.
That brings us to the ultimate cloud scenario, in which these "private" clouds owned by such companies as Amazon and Google melt into one giant, public cloud that contains all the user's data and applications and is accessible anytime on any device.
That's a long way off, however. In addition, the potential roadblocks are many. They include issues of licensing, privacy, security, compliance and network monitoring. A final potential stumbling block is that enterprise applications tend to be customized and intertwined, with one system feeding into or reporting back to another. That makes it pretty tough to pluck out an application and run it in the cloud without affecting every related application.
So for now, keep an eye on the cloud, but keep your feet firmly planted on the ground.
9. Web 2.0: Learn to live with it
The Web 2.0 phenomenon is unstoppable. Employees are turning in droves to blogs, wikis, mash-ups, social networking, crowdsourcing and other variations on the Web 2.0 theme. A recent Yankee Group survey found that 86% of non-IT workers are using at least one consumer Web 2.0 tool at work. As younger workers enter the enterprise workforce, access to Web 2.0 technologies will become only more of a given.
The challenge for IT executives is how best to harness Web 2.0 technologies in a way that's secure; serves such basic enterprise functions as collaboration; and adds to worker productivity, revenue generation and overall business benefits.
The possibilities are endless. A Gartner list of Web 2.0 applications includes answer marketplaces, collaborative product and service design, community-driven self-service, crowdsourcing, idea engines and prediction markets.
Many employees are using such social-networking sites as LinkedIn, MySpace or Twitter to communicate with peers and customers. In addition, a growing number of vendors aim to help companies set up and manage enterprise-grade Web 2.0 applications. For example, WorkLight offers Java-based software that will help authenticate, encrypt, store and manage Web 2.0 applications (see "10 start-ups to watch in '09"). Face Connector (formerly Faceforce) is a mash-up that brings Facebook profile and friend information seamlessly into Salesforce CRM. Socialtext 3.0 provides social networking, wikis and customizable home pages for the enterprise.
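To give a flavor of how a simple enterprise mash-up fits together, here is a purely hypothetical Python sketch that enriches CRM contact records with social-profile data pulled from a second service; the endpoints and field names are invented for illustration and do not correspond to any vendor's actual API.

```python
# Hypothetical mash-up sketch: enrich CRM contacts with social-profile data.
# Both endpoints and all field names are invented for illustration only.
import requests

CRM_API = "https://crm.example.com/api/contacts"        # hypothetical service
SOCIAL_API = "https://social.example.com/api/profiles"  # hypothetical service

def mashup():
    contacts = requests.get(CRM_API, timeout=10).json()
    merged = []
    for contact in contacts:
        profile = requests.get(
            f"{SOCIAL_API}/{contact['email']}", timeout=10
        ).json()
        merged.append({**contact, "social_profile": profile})
    return merged

if __name__ == "__main__":
    for row in mashup():
        print(row["email"], "->", row["social_profile"].get("headline"))
```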
That's just the tip of the iceberg. The key is identifying an application that fits with the culture of your company, then making it available and watching as the community takes off.
By Neal Weinberg
Source: Network World