Friday, August 19, 2016

Exam 70-532 Developing Microsoft Azure Solutions

Published: September 1, 2014
Languages: English, Japanese
Audiences: Developers
Technology: Microsoft Visual Studio, Microsoft Azure
Credit toward certification: MCP, Microsoft Azure Developer Specialist

Skills measured
This exam measures your ability to accomplish the technical tasks listed below. The percentages indicate the relative weight of each major topic area on the exam. The higher the percentage, the more questions you are likely to see on that content area on the exam. View video tutorials about the variety of question types on Microsoft exams.

Please note that the questions may test on, but will not be limited to, the topics described in the bulleted text.

Do you have feedback about the relevance of the skills measured on this exam? Please send Microsoft your comments. All feedback will be reviewed and incorporated as appropriate while still maintaining the validity and reliability of the certification process. Note that Microsoft will not respond directly to your feedback. We appreciate your input in ensuring the quality of the Microsoft Certification program.

If you have concerns about specific questions on this exam, please submit an exam challenge.

If you have other questions or feedback about Microsoft Certification exams or about the certification program, registration, or promotions, please contact your Regional Service Center.

The Microsoft Azure environment is constantly evolving. To maximize relevance, this exam is regularly updated to reflect both deprecated and new technologies and processes. As of March 10, 2016, this exam reflects an update. To learn more about these changes and how they affect the skills measured, please download and review the Exam 70-532 changes document.

Note: To ensure that they are aware of the latest updates, it is recommended that all individuals registering for this exam review this page several times before their scheduled exam.

Design and implement Web Apps (15‒20%)
Deploy Web Apps
Define deployment slots; roll back deployments; implement pre and post deployment actions; create, configure and deploy a package; create App Service plans; migrate Web Apps between App Service plans; create a Web App within an App Service plan
Configure Web Apps
Define and use app settings, connection strings, handlers, and virtual directories; configure certificates and custom domains; configure SSL bindings and runtime configurations; manage Web Apps by using the API, Azure PowerShell, and Xplat-CLI
Configure diagnostics, monitoring, and analytics
Retrieve diagnostics data, view streaming logs, configure endpoint monitoring, configure alerts, configure diagnostics, use remote debugging, monitor Web App resources
Implement web jobs
Write web jobs using the SDK, package and deploy web jobs, schedule web jobs
Configure Web Apps for scale and resilience
Configure auto-scale using built-in and custom schedules, configure by metric, change the size of an instance, configure Traffic Manager
Design and implement applications for scale and resilience
Select a pattern, implement transient fault handling for services, respond to throttling, disable Application Request Routing (ARR) affinity
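Several of the topics above (transient fault handling, responding to throttling) reduce to the same retry-with-exponential-backoff pattern. Below is a minimal, library-free sketch; `TransientError` and the delay values are illustrative assumptions, not part of any Azure SDK:

```python
import random
import time

class TransientError(Exception):
    """Illustrative stand-in for a throttling or transient service fault."""

def call_with_retries(operation, max_attempts=4, base_delay=0.1):
    """Retry `operation` on transient faults, backing off exponentially."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the fault to the caller
            # Backoff doubles each attempt (0.1s, 0.2s, 0.4s, ...) plus jitter,
            # so throttled clients don't all retry in lockstep.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.05))

# Simulated flaky dependency: fails twice with a 429-style error, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("429 Too Many Requests")
    return "ok"

print(call_with_retries(flaky))  # prints "ok" after two retries
```

The same shape applies whether the dependency is a Web App calling SQL Database or a worker draining a storage queue; production code would typically use a library-provided retry policy rather than hand-rolling one.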

Preparation resources
How to Deploy an Azure Web Site
Getting Started with the Azure WebJobs SDK
Cloud Design Patterns: Prescriptive Architecture Guidance for Cloud Applications

Create and manage virtual machines (20‒25%)
Deploy workloads on Azure Virtual Machines (VMs)
Identify workloads that can and cannot be deployed, run workloads including Microsoft and Linux, create VMs
Create and manage a VM image or virtual hard disk
Create specialized and reusable images, prepare images using SysPrep and Windows Agent (Linux), copy images between storage accounts and subscriptions, upload VMs
Perform configuration management
Automate configuration management by using PowerShell Desired State Configuration and VM Agent (custom script extensions); configure VMs using a configuration management tool, such as Puppet or Chef; enable remote debugging
Configure VM networking
Configure reserved IP addresses, Network Security Groups (NSG), DNS at the virtual network level, load balancing endpoints, HTTP and TCP health probes, public IPs, firewall rules, direct server return, and keep-alive
Scale VMs
Scale up and scale down VM sizes, configure auto-scale and availability sets
Design and implement VM storage
Configure disk caching, plan for storage capacity, configure shared storage using Azure File service, configure geo-replication
Monitor VMs
Configure endpoint monitoring, configure alerts, configure diagnostic and monitoring storage location

Preparation resources
Copy Blob
Load Balancing for Azure Infrastructure Services
How to Monitor Cloud Services

Design and implement cloud services (20‒25%)
Design and develop a cloud service
Install SDKs, install emulators, develop a web role or worker role, design and implement resiliency including transient fault handling, develop startup tasks
Configure cloud services and roles
Configure HTTPS endpoint and upload an SSL certificate, and instance count and size; configure network access rules, local storage, multiple Web Apps, custom domains, and dedicated and co-located caching; scale up and scale down role sizes; configure auto-scale
Deploy a cloud service
Upgrade an automatic, manual, or simultaneous deployment; VIP swap a deployment; package a deployment; implement continuous deployment from Visual Studio Online (VSO); implement runtime configuration changes using the portal; configure regions and affinity groups
Monitor and debug a cloud service
Configure diagnostics using the SDK or configuration file, profile resource consumption, enable remote debugging, establish a connection using Remote Desktop CmdLets in Azure PowerShell, debug using IntelliTrace or the emulator

Preparation resources
Continuous delivery to Azure using Visual Studio Online
Configuring SSL for an application in Azure
Windows Azure SDK for .NET - 2.2

Design and implement a storage strategy (20‒25%)
Implement Azure Storage blobs and Azure files
Read data, change data, set metadata on a container, store data using block and page blobs, stream data using blobs, access blobs securely, implement async blob copy, configure Content Delivery Network (CDN), design blob hierarchies, configure custom domains, scale blob storage, implement Azure Premium storage
Implement Azure storage tables
Implement CRUD with and without transactions, design and manage partitions, query using OData; scale tables and partitions
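The partition-design topics above come down to addressing every entity by a (PartitionKey, RowKey) pair. Here is a toy in-memory sketch of that layout; it illustrates the data model only and is not the Azure Tables API:

```python
from collections import defaultdict

class TinyTable:
    """Entities addressed by (partition_key, row_key), Azure-Tables style."""

    def __init__(self):
        # partition_key -> {row_key: entity}; one dict per partition
        self._partitions = defaultdict(dict)

    def upsert(self, pk: str, rk: str, entity: dict) -> None:
        self._partitions[pk][rk] = entity

    def get(self, pk: str, rk: str) -> dict:
        # Point lookup on both keys: the cheapest query shape in a real table
        return self._partitions[pk][rk]

    def query_partition(self, pk: str) -> list:
        # Scanning one partition stays on one server; cross-partition
        # queries are what hurt at scale, hence "design and manage partitions".
        return list(self._partitions[pk].values())

t = TinyTable()
t.upsert("students", "s001", {"name": "Ada"})
t.upsert("students", "s002", {"name": "Lin"})
print(t.get("students", "s001")["name"])   # Ada
print(len(t.query_partition("students")))  # 2
```

Choosing the partition key is the scaling decision: entities queried together should share a partition, while hot partitions should be split.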
Implement Azure storage queues
Add and process messages, retrieve a batch of messages, scale queues
Manage access
Generate shared access signatures, including client renewal and data validation; create stored access policies; regenerate storage account keys; configure and use Cross-Origin Resource Sharing (CORS)
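A shared access signature is, at heart, an HMAC over the resource, permissions, and expiry, signed with the account key so the server can verify a token without storing it. The sketch below shows that idea only; the real SAS string-to-sign format is defined by the Azure Storage service and differs from this illustrative one:

```python
import base64
import hashlib
import hmac
import time

def make_sas(account_key: bytes, resource: str, permissions: str, lifetime_s: int) -> str:
    """Issue a token granting `permissions` on `resource` until now + lifetime_s."""
    expiry = int(time.time()) + lifetime_s
    string_to_sign = f"{resource}\n{permissions}\n{expiry}"
    sig = hmac.new(account_key, string_to_sign.encode(), hashlib.sha256).digest()
    token = base64.urlsafe_b64encode(sig).decode()
    return f"res={resource}&perm={permissions}&exp={expiry}&sig={token}"

def verify_sas(account_key: bytes, sas: str) -> bool:
    """Recompute the signature and check expiry; no server-side token store needed."""
    fields = dict(part.split("=", 1) for part in sas.split("&"))
    string_to_sign = f"{fields['res']}\n{fields['perm']}\n{fields['exp']}"
    sig = hmac.new(account_key, string_to_sign.encode(), hashlib.sha256).digest()
    expected = base64.urlsafe_b64encode(sig).decode()
    return hmac.compare_digest(expected, fields["sig"]) and int(fields["exp"]) > time.time()

key = b"account-key"
sas = make_sas(key, "container/blob.txt", "r", 3600)
print(verify_sas(key, sas))           # True
print(verify_sas(b"wrong-key", sas))  # False
```

Because the token embeds its own expiry, "client renewal" just means requesting a fresh token before the old one lapses, and regenerating the storage account key invalidates every outstanding signature at once.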
Monitor storage
Set retention policies and logging levels, analyze logs
Implement SQL databases
Choose the appropriate database tier and performance level, configure and perform point in time recovery, enable geo-replication, import and export data and schema, scale SQL databases

Preparation resources
Azure Cloud Service Tutorial: ASP.NET MVC Web Role, Worker Role, and Azure Storage Tables, Queues, and Blobs - 1 of 5
Shared Access Signatures, Part 1: Understanding the SAS Model
How to Monitor a Storage Account

Manage application and network services (15‒20%)
Integrate an app with Azure Active Directory
Develop apps that use WS-federation, OAuth, and SAML-P endpoints; query the directory using graph API
Configure a virtual network
Deploy a VM into a virtual network, deploy a cloud service into a virtual network
Modify network configuration
Modify a subnet, import and export network configuration
Design and implement a communication strategy
Develop messaging solutions using service bus queues, topics, relays, and notification hubs; create service bus namespaces and choose a tier; scale service bus
Monitor communication
Monitor service bus queues, topics, relays, and notification hubs
Implement caching
Implement Redis caching, migrate solutions from Azure Cache Service to use Redis caching

Preparation resources
Azure AD Graph API
Notification Hubs Monitoring and Telemetry


QUESTION 1
You are deploying the web-based solution in the West Europe region.
You need to copy the repository of existing works that the plagiarism detection service uses. You must achieve this goal in the least amount of time.
What should you do?

A. Copy the files from the source file share to a local hard disk. Ship the hard disk to the West Europe data center by using the Azure Import/Export service.
B. Create an Azure virtual network to connect to the West Europe region. Then use Robocopy to copy the files from the current region to the West Europe region.
C. Provide access to the blobs by using the Microsoft Azure Content Delivery Network (CDN). Modify the plagiarism detection service so that the files from the repository are loaded from the CDN.
D. Use the Asynchronous Blob Copy API to copy the blobs from the source storage account to a storage account in the West Europe region.

Answer: D

Explanation:
Ref: http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/12/introducing-asynchronous-cross-account-copy-blob.aspx
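The asynchronous copy behind answer D is a start-then-poll pattern: the client asks the storage service to copy server-side, then periodically checks the copy status, transferring no data itself. A stdlib-only sketch; `FakeBlobService` is an illustrative stand-in for the real storage API:

```python
import time

class FakeBlobService:
    """Stand-in for a storage service that performs copies in the background."""

    def __init__(self, ticks_to_finish=3):
        self._remaining = ticks_to_finish

    def start_copy(self, source_url: str, dest_blob: str) -> str:
        # The real API returns a copy ID immediately; the copy runs server-side.
        return "copy-id-1"

    def copy_status(self, dest_blob: str) -> str:
        self._remaining -= 1
        return "success" if self._remaining <= 0 else "pending"

def copy_and_wait(service, source_url: str, dest_blob: str, poll_interval=0.01) -> str:
    """Kick off a server-side copy, then poll until it leaves the pending state."""
    service.start_copy(source_url, dest_blob)
    while (status := service.copy_status(dest_blob)) == "pending":
        time.sleep(poll_interval)  # client only polls; bytes move inside the service
    return status

print(copy_and_wait(FakeBlobService(), "https://source/works", "works"))  # success
```

This is why answer D is fastest: the data moves between data centers inside the storage service rather than through the client's network connection.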

QUESTION 2
You update the portion of the website that contains biographical information about students.
You need to provide data for testing the updates to the website.
Which approach should you use?

A. Use SQL Server data synchronization.
B. Use the Active Geo-Replication feature of Azure SQL Database.
C. Use SQL Replication.
D. Use the Geo-Replication feature of Azure Storage.

Answer: A


QUESTION 3
The website does not receive alerts quickly enough.
You need to resolve the issue.
What should you do?

A. Enable automatic scaling for the website.
B. Manually increase the instance count for the worker role.
C. Increase the amount of swap memory for the VM instance.
D. Set the monitoring level to Verbose for the worker role.
E. Enable automatic scaling for the worker role.

Answer: B

Thursday, March 3, 2016

650-159 ICA Cisco IronPort Cloud Associate

Exam Number 650-159
Duration 90 minutes (25-35 questions)

The 650-159 ICA Cisco IronPort Cloud Associate exam tests your knowledge of the following:
What ScanSafe does and how it works
The various deployment methods at a technical level so you can recommend the most suitable deployment to customers according to their needs and existing infrastructure
Basic administration, how to manage a web-filtering policy, and how to run reports on web usage within the ScanCenter GUI

Exam Topics
The following topics are general guidelines for the content likely to be included on the exam. However, other related topics may also appear on any specific delivery of the exam. In order to better reflect the contents of the exam and for clarity purposes, the guidelines below may change at any time without notice.
How to Sell ScanSafe QLMs
Web Filtering
Advanced WIRe
ScanCenter Lab Exercise
Deployment Options
Outbreak Intelligence

QUESTION 1
What important consideration do you need to be aware of when using a Connector?

A. Multiple DHCP servers
B. Multiple DNS servers
C. Multiple break-out points

Answer: C

Explanation:


QUESTION 2
How can AnyConnect be bypassed by a user when installed locked-down?

A. When locked-down, AnyConnect can be bypassed by a user by changing the Browser Proxy settings
B. When locked-down, AnyConnect can be bypassed by a user if they know the admin password
C. AnyConnect cannot ever be bypassed by a user when installed locked-down

Answer: C

Explanation:


QUESTION 3
How are different time zones supported by WIRe?

A. Each entry is converted to UTC as it is stored, so you can select any time zone in the GUI when searching and see the times according to the user's local time
B. All entries are recorded only in their local time zone, so you need to calculate the time offset when searching for data of users in different time zones
C. There is no support for different time zones in WIRe

Answer: A

Explanation:
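Answer A describes the standard store-in-UTC, render-in-local pattern. A generic illustration with Python's stdlib `zoneinfo` (not WIRe's actual implementation):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def store_entry(local_dt: datetime) -> datetime:
    """Normalize an aware local timestamp to UTC before it is stored."""
    return local_dt.astimezone(timezone.utc)

def display_entry(stored_utc: datetime, tz_name: str) -> datetime:
    """Render a stored UTC timestamp in whichever zone the GUI user selects."""
    return stored_utc.astimezone(ZoneInfo(tz_name))

# A log entry generated at 09:00 in New York (EST, UTC-5 on this date)...
event = datetime(2016, 3, 3, 9, 0, tzinfo=ZoneInfo("America/New_York"))
stored = store_entry(event)                        # stored as 14:00 UTC
in_london = display_entry(stored, "Europe/London")
print(stored.hour, in_london.hour)                 # 14 14
```

Converting once on write means any viewer's time zone is purely a display choice, which is exactly the property option A describes.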


QUESTION 4
If a customer wants roaming protection for laptops with Windows 7 64 bit, and is not using Cisco's
VPN, which one of the following would be the best solution:

A. Anywhere*
B. AnyConnect Web Security standalone client
C. Anywhere* or AnyConnect are both suitable
D. This scenario cannot be supported by Anywhere* or AnyConnect

Answer: B

Explanation:


QUESTION 5
How many reports can be included in a Scheduled Report?

A. 20
B. 75
C. Only 1
D. Only 1, but this can be a Composite Report

Answer: D

Explanation:


Tuesday, March 10, 2015

The Big Question: How to Become Microsoft, Cisco, CompTIA Certified

The big question is how to become Microsoft certified. All Microsoft certifications are acquired by simply taking a series of exams. If you can self-study for those exams and then pass them, you can acquire the certification for the mere cost of the exam (plus whatever self-study materials you purchase).

The Microsoft certifications listed below are basically all of the major ones within the realm of development. I'll cover each of the major ones and what they are:

Microsoft Certified Technology Specialist (MCTS) - This is the basic entry point of Microsoft certifications. You only need to pass a single certification test to be considered an MCTS, and there are numerous different courses and certifications that can grant you this title after passing one exam. If you are shooting for some of the higher certifications discussed below, you'll earn this on your way there.

Microsoft Certified Professional Developer (MCPD) - This certification was Microsoft's previous "developer certification," meaning it was the highest certification offered that consisted strictly of development-related material. Receiving it involved passing four exams within specific areas (based on the focus of your certification). You can find the complete list of courses and paths required for the MCPD here.

Microsoft Certified Solutions Developer (MCSD) - This is Microsoft's most recent "developer certification," which replaces the MCPD certification (being deprecated/retired in July of 2013). The MCSD focuses on three major areas of very recent Microsoft development technologies and would likely be the best to pursue if you want to focus on current and emerging skills that will be relevant in the coming years. You can find the complete list of courses and paths required for the MCSD here.

You'll also need, at minimum (in addition to the MCTS), the CompTIA A+, Network+ and Security+ certs, as well as the Cisco CCNA cert.

Most people, however, take some kind of course. Some colleges -- especially career and some community colleges -- offer such courses (though usually they're non-credit). Other providers of such courses are private... some of them Microsoft-certified vendors of one type or another, who offer the courses in such settings as sitting around a conference table in their offices. Still others specialize in Microsoft certification training, and so have nice classrooms set up in their offices.

There are also some online (and other forms of distance learning) courses to help prepare for the exams.

The cost of taking classes to prepare can vary wildly. Some are actually free (or very nearly so), while others can cost hundreds of dollars. It all just depends on the provider.

And here's a Google search of MCTS training resources (which can be mind-numbing in their sheer numbers and types, so be careful what you choose):

There are some pretty good, yet relatively inexpensive, ways to get vendor certificate training. Be careful not to sign up for something expensive and involved when something cheaper -- like subscribing to an "all the certificates you care to study for, one flat rate" website, plus a study guide or two from a bookstore -- would likely serve you better.

If you want a career in IT, then you need both an accredited degree in IT (preferably a bachelor's over an associate's) and a variety of IT certifications. The MCTS is but one that you will need.

You should probably also get the Microsoft MCSE and/or MCSA, the (ISC)² CISSP, and the ITIL.

There are others, but if you have those, you'll be evidencing a broad range of IT expertise that will be generally useful. Then, if the particular IT job in which you end up requires additional specialist certification, you can get that, too (hopefully at the expense of the employer who requires it of you).

Then, whenever (if ever) you're interested in a masters in IT, here's something really cool of which you should be aware...

There's a big (and fully-accredited, fully-legitimate) university in Australia which has partnered with Microsoft and several other vendors to structure distance learning degrees which include various certifications; and in which degrees, considerable amounts of credit may be earned simply by acquiring said certifications. It's WAY cool.

One can, for example, earn up to half of the credit toward a master's degree in information technology simply by getting an MCSE (though the exams that make it up must be the specific ones that correspond with the university's courses). I've always said that if you're going to get an MCSE, first consult that university's website and make sure you take the specific MCSE exams the school requires, so that if you ever decide to enter its master's program, you will have already earned up to half of the degree's credits simply by having the MCSE under your belt. Is that cool, or what?

I wouldn't rely on them over experience (which is far and away the most valuable asset out there) but they are worth pursuing especially if you don't feel like you have enough experience and need to demonstrate that you have the necessary skills to land a position as a developer.

If you are going to pursue a certification, I would recommend going after the MCSD (Web Applications Track), as it is a very recent certification that focuses on several emerging technologies that will still be very relevant (if not more so) in the coming years. You'll pick up the MCTS along the way, and then you'll have both of those under your belt. The MCPD would be very difficult to achieve given the short time constraints (passing four quite difficult tests within just a few months is feasible, but I don't believe it is worth it, since the certification will be "retired" soon after).

No job experience at all is necessary for any of the Microsoft certifications; you can take them at any time. As long as you feel confident enough with the material of the specific exam, you should be fine. The tests are quite difficult by most standards and typically cover large amounts of material, but with what sounds like a good bit of time to study and prepare, you should do well.

Certifications, in addition to degrees, are now so important in the IT field that one can hardly get a job without both. The certifications are so important that someone with a little IT experience can get a pretty good job even without a degree, as long as he has all the right certs. But don't do that. Definitely get the degree... and not merely an associate's. Get the bachelor's in IT, and make sure it's from a "regionally" accredited school.

Then get the certs I mentioned (being mindful, if you think you'll ever get an IT master's, to take the specific exams that that master's program requires, so that you'll have already earned up to half the credit just from the certs).

If you already have two years of experience working in the .NET environment, a certification isn't going to guarantee employment, a salary increase or any other bonus. However, it can supplement your resume by indicating that you are familiar enough with specific technologies to apply them to solving real-world problems.

If you're ready for a career change and looking for Microsoft MCTS training, Microsoft MCITP training or any other Microsoft certification preparation, get the best online training from Certkingdom.com. They offer training for all Microsoft, Cisco and CompTIA certification exams in one Unlimited Lifetime Access Pack, with self-study training kits including Q&A, study guides, testing engines, videos, audio and preparation labs for over 2,000 exams, saving you the money and time of boot camps and training institutes. All training materials are "guaranteed" to get you certified on the first attempt.


Wednesday, February 4, 2015

Why Google wants to replace Gmail

Gmail represents a dying class of products that, like Google Reader, puts control in the hands of users, not signal-harvesting algorithms.

I'm predicting that Google will end Gmail within the next five years. The company hasn't announced such a move -- nor would it.

But whether we like it or not, and whether even Google knows it or not, Gmail is doomed.

What is email, actually?
Email was created to serve as a "dumb pipe." In mobile network parlance, a "dumb pipe" is when a carrier exists to simply transfer bits to and from the user, without the ability to add services and applications or serve as a "smart" gatekeeper between what the user sees and doesn't see.

Carriers resist becoming "dumb pipes" because there's no money in it. A pipe is a faceless commodity, valued only by reliability and speed. In such a market, margins sink to zero or below zero, and it becomes a horrible business to be in.

"Dumb pipes" are exactly what users want. They want the carriers to provide fast, reliable, cheap mobile data connectivity. Then, they want to get their apps, services and social products from, you know, the Internet.

Email is the "dumb pipe" version of communication technology, which is why it remains popular. The idea behind email is that it's an unmediated communications medium. You send a message to someone. They get the message.

When people send you messages, they stack up in your in-box in reverse-chronological order, with the most recent ones on top.

Compare this with, say, Facebook, where you post a status update to your friends, and some tiny minority of them get it. Or, you send a message to someone on Facebook and the social network drops it into their "Other" folder, which hardly anyone ever checks.

Of course, email isn't entirely unmediated. Spammers ruined that. We rely on Google's "mediation" in determining what's spam and what isn't.

But still, at its core, email is by its very nature an unmediated communications medium, a "dumb pipe." And that's why people like email.

Why email is a problem for Google

You'll notice that Google has made repeated attempts to replace "dumb pipe" Gmail with something smarter. They tried Google Wave. That didn't work out.

They hoped people would use Google+ as a replacement for email. That didn't work, either.

They added prioritization. Then they added tabs, separating important messages from less important ones via separate containers labeled by default "Primary," "Promotions," "Social," "Updates" and "Forums." That was vaguely popular with some users and ignored by others. Plus, it was a weak form of mediation -- merely reshuffling what's already there, but not inviting a fundamentally different way to use email.

This week, Google introduced an invitation-only service called Inbox. Another attempt by the company to mediate your dumb email pipe, Inbox is an alternative interface to your Gmail account, rather than something that requires starting over with a new account.

Instead of tabs, Inbox groups together and labels and color-codes messages according to categories.

One key feature of Inbox is that it performs searches based on the content of your messages and augments your in-box with that additional information. One way to look at this is that, instead of grabbing relevant extra data based on the contents of your Gmail messages and slotting it into Google Now, it shows you those Google Now cards immediately, right there in your in-box.

Inbox identifies addresses, phone numbers and items (such as purchases and flights) that have additional information on the other side of a link, then makes those links live so you can take quick action on them.

You can also do mailbox-like "snoozing" to have messages go away and return at some future time.

You can also "pin" messages so they stick around, rather than being buried in the in-box avalanche.

Inbox has many other features.

The bottom line is that it's a more radical mediation between the communication you have with other people and with the companies that provide goods, services and content to you.

The positive spin on this is that it brings way more power and intelligence to your email in-box.

The negative spin is that it takes something user-controlled, predictable, clear and linear, and wrests control away from the user, making email unpredictable, unclear and nonlinear.

That users will judge this and future mediated alternatives to email and label them either good or bad is irrelevant.

The fact is that Google, and companies like Google, hate unmediated anything.

The reason is that Google is in the algorithm business, using user-activity "signals" to customize and personalize the online experience and the ads that are served up as a result of those signals.

Google exists to mediate the unmediated. That's what it does.

That's what the company's search tool does: It mediates our relationship with the Internet.

That's why Google killed Google Reader, for example. Subscribing to an RSS feed and having an RSS reader deliver 100% of what the user signed up for in an orderly, linear and predictable and reliable fashion is a pointless business for Google.

It's also why I believe Google will kill Gmail as soon as it comes up with a mediated alternative everyone loves. Of course, Google may offer an antiquated "Gmail view" as a semi-obscure alternative to the default "Inbox"-like mediated experience.

But the bottom line is that dumb-pipe email is unmediated, and therefore it's a business that Google wants to get out of as soon as it can.

Say goodbye to the unmediated world of RSS, email and manual Web surfing. It was nice while it lasted. But there's just no money in it.


Wednesday, May 8, 2013

Cisco gets tough: Details ruggedized switches for harsh environments

In small form factor, CGS-1000 switch is low-latency and designed for utilities

Cisco, which wants to expand its clout into the industrial networks used by power-generation utilities to support the electric grid, today announced an expansion of its "smart grid" portfolio with ruggedized and low-latency switches and other equipment intended for use in electric-power distribution systems.

"Utilities often have systems unique to them," said Jenny Gomez, marketing manager in Cisco's Connected Energy Business Unit, which oversees the architecting of a wide range of networking and physical-security equipment intended to modernize utility networks while supporting legacy systems that in some cases aren't being swapped out. As part of this effort, Cisco today introduced a number of products, including the small-form-factor CGS-1000 switch for use in locations such as an electric substation that is part of a complex, critical power-distribution system.

"The CGS-1000 is built to withstand harsh environments," pointed out Joe Ammirato, senior director of product management at Cisco's Connected Energy Business Unit. The ruggedized low-latency switch is intended to be able to work with utility sensors that often have serial interfaces, not Ethernet ones, Ammirato noted.

With experience gained over a few years in working with some utilities in North America and Europe, Cisco has been designing specialized switching, network access and security products for both data center and substation equipment under its so-called GridBlocks architecture. The push is part of Cisco's much-ballyhooed "Internet of Everything" initiative that's taking the company further into areas outside of traditional business IT networking.

The GridBlocks architecture supposes that Cisco will be able to provide equipment aimed not only at the substation tier, but also the system-control level where wide-area networks connect substations with each other and with control systems as well as SCADA and other event messaging.

The architecture also addresses utility data centers and control centers, plus devices and systems associated with residences and third-party elements. The Cisco GridBlocks architecture lists several other tiers for interchange and trans-national grid monitoring and tie-ins as well. In many cases, Cisco isn't coming up with entirely new products for the utilities but adapting its network segmentation, switching, security and management platforms to try to suit the specialized needs utilities have.

Cisco is recommending MPLS as a core technology for utility use, especially in substations, says Ammirato, who points out that utility networks associated with the grid may not necessarily be IP-based at all today or they connect in a style not seen in modern IT business networks. For utilities intending to modernize, the challenge is in swapping out what can reasonably be changed while bridging older legacy systems that for one reason or another will remain related to the transmission grid.






Saturday, January 5, 2013

The 4 leading causes of data security breakdowns

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

A string of high-profile data breaches in 2012, from LinkedIn to Global Payments, has kept enterprise data security in the limelight. But most organizations still tend to be reactive and focus on firefighting when it comes to data security, rather than implementing a more effective long-term strategy. Let's examine the four most common pitfalls of this short-sighted approach.

* Lack of standards. Ad hoc security is often based upon multiple "standards" and solutions across disparate functional IT groups. For example, data encryption is often poorly implemented by IT professionals who don't really understand data security requirements. Some organizations are adopting emerging industry standards for encryption like the Key Management Interoperability Protocol (KMIP). However, it is still immature and thus no panacea. This overall lack of IT security standards leads to higher management costs, redundant processes, and greater risk of a data breach.

YEAR IN REVIEW: Worst security snafus of 2012

* No central control. In a similar fashion, individual security and encryption tools provide their own management consoles for administration, monitoring/auditing and key management. Each ad hoc solution needs to be configured separately and will provide different levels of functionality, sophistication and certification. This creates an operations quagmire. It also introduces varying degrees of risk depending upon each tool and how it is implemented. Of equal concern, CISOs have no central way to assess, monitor, address and report on how effectively these disparate security measures are working. That's because each individual security tool has its own policies, provisioning and management system. This translates into escalating costs and complexity.

* Disconnected management systems. Multiple security technologies each have to be provisioned, managed and monitored separately. Again, with no centralized management or policies, administration must be performed within each tool. The sheer number of management operations increases the risk of configuration errors, which can lead to a security breach or unrecoverable critical file.

* IT misalignment. Ad hoc security tools are often deployed in a manner where functional IT groups have access to and control over these systems. A good example is encryption keys. When multiple encryption solutions are implemented across an enterprise, encryption keys are exposed to numerous IT staff members. This violates a key information security best practice, namely, separation of duties. Since encryption keys are exposed to a wide group of individuals, this greatly increases the risk of an insider attack.

Three-step strategy to centralize security

There are three steps organizations can take to address the shortcomings associated with ad hoc security implementations.

The first is to consolidate core IT management disciplines. These include policy management, configuration management and reporting/auditing. All of these management activities should be controllable from one central location with actual execution occurring throughout the enterprise. In the reverse direction, all management information should flow back to the centralized management repository for storage, analysis, and reporting purposes.

Second, implement distributed policy enforcement. Centralized security policies must be enforced on heterogeneous systems distributed throughout the enterprise. To accomplish this, central management consoles must be able to distribute agents, configure individual systems, securely manage them and log all activities.
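The distribution-and-feedback loop described above can be sketched in a few lines. This is a hedged illustration only: the `Endpoint` shape, the host names, and the policy keys are invented for the example, not taken from any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    """A managed system that receives central policy and reports activity."""
    name: str
    policy: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

def distribute_policy(policy: dict, endpoints: list) -> list:
    """Push one central policy to every endpoint and collect an audit trail."""
    audit_trail = []
    for ep in endpoints:
        ep.policy = dict(policy)              # same rules on every system
        ep.log.append("policy applied")       # activity logged locally...
        audit_trail.append((ep.name, "policy applied"))  # ...and centrally
    return audit_trail
```

The key property is that enforcement happens on each heterogeneous system while the record of what was applied flows back to a single repository.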

Third, deploy tiered administration. This enables enterprises to set and enforce both enterprise and departmental policies and allows separation of duties where security administrators, not functional IT staff, maintain management control over their security domains. For example, a database administrator at a financial services firm can be granted the power to maintain an Oracle database, but not rights to access regulated financial data. This approach protects the confidentiality and integrity of sensitive data by limiting access to security management based on job requirements.
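The DBA example above amounts to role-based separation of duties: a role carries only the permissions its job requires. A minimal sketch, with role and permission names invented for illustration:

```python
# Each role maps to the explicit set of actions it is permitted to perform.
ROLE_PERMISSIONS = {
    "security_admin": {"manage_keys", "set_policy", "view_audit_log"},
    "dba": {"backup_db", "tune_db", "restore_db"},   # maintenance only
    "analyst": {"read_financial_data"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Under this model the DBA can maintain the database (`is_allowed("dba", "backup_db")` is true) but has no path to regulated data or encryption keys, because those rights simply do not appear in the role.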

Deploying a centralized enterprise security management strategy will require financial and IT resource investments. However, it will enable organizations to control and share information while managing risk. Most importantly, it will help prevent data security breakdowns that lead to breaches and costly public disclosures.




Thursday, November 15, 2012

Office 365 email conks out twice within a week

Antivirus issue, infrastructure failures to blame

Microsoft's Office 365 service has suffered two email outages within a week of each other. The outages, which affected some customers in North and South America, stemmed from different causes but ended in the same result: failed email delivery.

The first outage, on Nov. 8, stemmed from an overwhelmed antivirus engine and the mail backlog that the slowdown caused. The second, on Nov. 13, resulted from the failure of unspecified network elements, routine maintenance and increased load, which combined to degrade service, according to the Office 365 blog posted by Rajesh Jha, the corporate vice president of Microsoft's Office division.



He didn't say how many customers were affected or where they were located, other than somewhere on the two continents. Both outages affected only Office 365 Exchange Online mail services.

Affected customers are entitled to a service credit. Jha apologizes and promises a post mortem on the outages as well as an update on how the Office 365 service level agreement was affected.

The Nov. 8 incident started when an antivirus engine bogged down as it processed emails that the engine determined carried a particular virus. That delay in processing emails led to retries that further bottlenecked email flow, including legitimate emails, he says.

The issue was resolved by intercepting the tainted messages and quarantining them directly.

To head off similar problems down the line, the company has set a lower threshold for diverting problem emails and implemented faster remediation tools. It is also adding unspecified safeguards that automate remediation of this type of problem, Jha says.
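The remediation idea described above, diverting suspect messages earlier instead of retrying them through the main pipeline, can be sketched as a simple threshold check. This is not Microsoft's actual mechanism; the score scale and threshold value are invented for illustration.

```python
# Lowered threshold: divert suspect mail earlier so retries cannot
# back up the flow of legitimate messages behind it.
QUARANTINE_THRESHOLD = 0.5

def route_message(scan_score: float) -> str:
    """Decide whether a scanned message is delivered or quarantined.

    scan_score: 0.0 (clean) to 1.0 (certainly malicious), per the
    hypothetical antivirus engine's verdict.
    """
    if scan_score >= QUARANTINE_THRESHOLD:
        return "quarantine"   # removed from the pipeline; no retry loop
    return "deliver"
```

Lowering `QUARANTINE_THRESHOLD` trades a higher false-positive rate (more clean mail held for review) for protection of overall throughput.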

The second incident, on Nov. 13, started with scheduled maintenance that required shifting some load out of the data centers undergoing maintenance. During this work, unspecified network elements failed but sent no alerts of their failure, he says. On top of that, the entire infrastructure was handling more traffic from new customers; all of this left some customers unable to access email services.

Traffic for affected users was shifted to healthy data centers while the issues were dealt with.
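Shifting traffic to healthy data centers, as described above, is essentially health-check-based routing. A minimal sketch, with data center names and health states invented for the example:

```python
# Hypothetical health map: True = passing health checks.
DATACENTERS = {"us-east": True, "us-south": False, "sa-east": True}

def pick_datacenter(preferred: str) -> str:
    """Route to the preferred data center unless its health check fails."""
    if DATACENTERS.get(preferred):
        return preferred
    healthy = [dc for dc, ok in DATACENTERS.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy data center available")
    return healthy[0]   # fail over to the first healthy alternative
```

The Nov. 13 failure mode, elements that failed silently, is exactly what this scheme depends on avoiding: a health map is only as good as the alerts that update it.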

Jha says the company is in the midst of increasing capacity and is automating how equipment failures are handled to speed up recovery time.

In addition, the company is reviewing its processes to head off future outages.

"As I've said before," Jha blogs, "all of us in the Office 365 team and at Microsoft appreciate the serious responsibility we have as a service provider to you, and we know that any issue with the service is a disruption to your business - that's not acceptable. I want to assure you that we are investing the time and resources required to ensure we are living up to your - and our own - expectations for a quality service experience every day."

