SY0-601 Architecture and Design Study Guide for the CompTIA Security+
General Information
Questions about architecture and design make up about 21% of the CompTIA Security+ exam. These questions assess how well you understand the physical components of security. Some questions on the test begin with a scenario, including about 12% of those related to architecture and design.
Security in an Enterprise Environment
Hardening a company’s infrastructure requires understanding the potential risks it may face in the future, such as natural disasters and physical attacks. Be sure you are able to explain the concepts covered in this section.
Configuration Management
Configuration management is critical for cybersecurity professionals because it allows a baseline to be established for the various operating systems (OSs) within the company. Configuration management includes configuration standards, tools, and documentation, all of which are essential to creating a secure network environment.
Diagrams
Diagrams are visual representations of the network that show its architecture and data flows. They help convey how the network is composed and how information moves within it.
Baseline Configuration
A baseline configuration specifies the standard minimum configuration requirements for different systems within the network, such as different OSs or devices. It provides a starting point for all connected systems and devices and can be adjusted to meet the additional requirements of specific groups or teams.
Standard Naming Conventions
Standard naming conventions are a set of enterprise-specific naming protocols used to organize and identify systems on a network. For example, a location abbreviation can identify different endpoint locations in the network, such as NashF763, where Nash stands for Nashville, F stands for the finance group, and 763 is the employee number. Besides easing identification, standard naming conventions help obscure a system’s purpose from threat actors and make filtering and sorting systems or devices easier.
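As an illustration, here is a minimal Python sketch that parses names following a hypothetical <City><GroupCode><EmployeeNumber> pattern inferred from the NashF763 example above; the pattern is an assumption for demonstration, not a real standard.

    import re

    # Hypothetical convention from the example above: a city abbreviation,
    # a one-letter group code, and an employee number, e.g., "NashF763".
    NAME_PATTERN = re.compile(r"^(?P<city>[A-Z][a-z]+)(?P<group>[A-Z])(?P<employee>\d+)$")

    def parse_device_name(name):
        """Split an enterprise device name into its convention fields."""
        match = NAME_PATTERN.match(name)
        if match is None:
            raise ValueError(f"{name!r} does not follow the naming convention")
        return match.groupdict()

    print(parse_device_name("NashF763"))
    # {'city': 'Nash', 'group': 'F', 'employee': '763'}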
Internet Protocol (IP) Schema
Employing a standardized IP schema involves segmenting systems based on criteria such as purpose or location. This makes it easier to assign appropriate IP addresses to specific devices and systems and reduces the likelihood of address collisions or of running out of available IP addresses. A standardized IP schema can also help identify unwanted or unneeded IP address assignments.
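The sketch below, using Python’s standard ipaddress module, shows one way such a schema might be laid out; the address block and department names are assumptions chosen only for illustration.

    import ipaddress

    # Assumed schema: one /24 per department carved from a corporate /16,
    # so an address reveals its segment at a glance.
    corporate_block = ipaddress.ip_network("10.10.0.0/16")
    departments = ["finance", "engineering", "hr", "guest"]

    subnet_iter = corporate_block.subnets(new_prefix=24)
    schema = {dept: next(subnet_iter) for dept in departments}

    for dept, net in schema.items():
        print(f"{dept:12} {net}")
    # finance      10.10.0.0/24
    # engineering  10.10.1.0/24
    # hr           10.10.2.0/24
    # guest        10.10.3.0/24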
Data Sovereignty
Data sovereignty is the principle that any data collected, stored, or processed must comply with the legal restrictions of the location in which it is held or processed. For example, data housed in the cloud can bounce through multiple data centers in multiple countries, which means the data must meet the legal requirements of each data center it is routed through.
Data Protection
Data protection is vital to security and uses various techniques to protect data wherever it resides on the network or in a system. It covers data in all its states: at rest, in motion, and in processing.
Data Loss Prevention (DLP)
DLP systems or programs are used to help maintain the security of data within the system or network. These systems are designed to monitor all data within the system to ensure that the policies and procedures set forth are followed. They also allow monitoring for potentially sensitive data that may be at risk or not properly protected. DLP systems may also act proactively to prevent sensitive data from leaving the system by blocking transmissions and alerting the security team to potential incidents.
Masking
Masking is a cybersecurity technique for de-identification that obfuscates or anonymizes potentially sensitive data so it cannot be observed or viewed by unprivileged entities. Masking can be as simple as replacing each character with an asterisk as a password is entered, or it can involve replacing the entire data set with a preset code that can be unmasked by authorized users.
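A minimal sketch of the first kind of masking, assuming a simple rule of showing only the last four digits of a card number:

    def mask_card_number(card_number, visible=4):
        """Replace all but the last `visible` digits with asterisks."""
        digits = card_number.replace(" ", "").replace("-", "")
        return "*" * (len(digits) - visible) + digits[-visible:]

    print(mask_card_number("4111 1111 1111 1234"))  # ************1234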
Encryption
Encryption involves applying complex mathematical algorithms to data to protect it while at rest or in transit. Once an encryption algorithm is applied, the only way to view the unencrypted data is to apply the corresponding key. This way, if the data is stolen by a threat actor in transit, it is completely unintelligible and useless unless the attacker has the decryption key.
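As a sketch of symmetric encryption, the snippet below uses the third-party Python cryptography library’s Fernet recipe; any comparable algorithm and key management scheme could stand in for it.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Symmetric encryption: the same key both encrypts and decrypts.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    token = cipher.encrypt(b"employee salary data")  # useless without the key
    plaintext = cipher.decrypt(token)

    print(token)      # opaque ciphertext, safe to store or transmit
    print(plaintext)  # b'employee salary data'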
At Rest
Data that is “at rest” is data currently stored at a permanent location awaiting retrieval or use. Data at rest locations include Universal Serial Bus (USB) drives, hard drives, cloud services, and anywhere else data can be stored.
In Transit/Motion
Data “in motion” or “in transit” is data that is actively being transmitted between two locations or points, such as from one computer to another or from one city to another.
In Processing
Data that is “in processing” is being used by a workstation or a server and is held in the device’s temporary memory. For example, if a human resources employee is reviewing pay data from the employee database, the records being accessed are held in the workstation’s random access memory (RAM) for quicker retrieval.
Tokenization
Tokenization replaces private data values with unique identifiers, which are stored in a lookup table. For example, a Social Security number could be replaced with a random nine-digit number. When a system needs the original Social Security number, the token is converted back using the lookup table. This keeps the data secure.
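A minimal sketch of such a lookup table, assuming an in-memory dictionary stands in for a hardened token vault:

    import secrets

    # In-memory stand-in for a protected token vault.
    token_vault = {}

    def tokenize(ssn):
        """Replace a Social Security number with a random nine-digit token."""
        token = "".join(secrets.choice("0123456789") for _ in range(9))
        token_vault[token] = ssn  # a real vault would also guard against collisions
        return token

    def detokenize(token):
        """Convert a token back to the original value via the lookup table."""
        return token_vault[token]

    token = tokenize("123-45-6789")
    print(token)              # e.g., '508271634' -- unrelated to the real SSN
    print(detokenize(token))  # '123-45-6789'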
Rights Management
Rights management protects data and creative works by controlling how they can be used and by whom. It includes information rights management (IRM), which protects documents and works containing sensitive information, and digital rights management (DRM), which protects mass-produced media. Both safeguard sensitive and copyrighted data.
Geographical Considerations
There are numerous geographical considerations for data storage, including off-site storage, distance and location selection, legal implications, and data sovereignty. At least one copy of the data should be stored in a separate geographic location to protect it from a natural disaster. The general rule of thumb is that an organization should keep a copy at least 90 miles away so it is not affected by geographically related incidents, such as natural disasters, or mechanical failures, such as a power grid outage.
Response and Recovery Controls
Response controls are protocols designed to ensure the most efficient response to an incident in order to maintain availability. They can include a plan for non-persistence, the ability to cycle a system on and off as needed, or reversion, the ability for a system to revert to a known state when restarted. Recovery controls are protocols designed to return the system to normal operations, such as restoring a last known good state through the use of backup checkpoints or snapshots.
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) Inspection
SSL and TLS inspection uses software designed to inspect all incoming and outgoing SSL or TLS traffic for potential threats or compliance issues. While both SSL and TLS are encryption protocols, the traffic they carry may still expose an organization to threat actors, malware, or insider threats, such as employees emailing sensitive or unencrypted data out of the organization.
Hashing
Hashing applies a mathematical algorithm to a file to generate a unique number, or hash. For example, many password protocols use a hash value in place of the actual password: instead of storing the password in plaintext, the hash value is stored. When the correct password is entered, it produces the correct hash value, verifying authentication. A hash value can also be used to ensure data integrity. A hash algorithm is applied to the entire data set or transmission, and when the data is received at its destination, it is re-hashed to confirm the two hash values match. If the hash values differ, the data has been tampered with or altered in some way.
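The integrity check described above can be sketched with Python’s standard hashlib module; SHA-256 is one common choice of hash algorithm.

    import hashlib

    def sha256_hex(data):
        """Return the SHA-256 hash of the data as a hex string."""
        return hashlib.sha256(data).hexdigest()

    original = b"quarterly financial report"
    sent_hash = sha256_hex(original)

    # On the receiving end, re-hash the data and compare the values.
    received = b"quarterly financial report"
    if sha256_hex(received) == sent_hash:
        print("Integrity verified: data unchanged")
    else:
        print("Hash mismatch: data was altered")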
Application Programming Interface (API) Considerations
An API is the interface between clients, servers, applications, and operating systems that defines how they communicate with each other. Methods to protect APIs include authentication, which prevents unauthorized access, and authorization, which ensures each developer has only the access that is needed rather than root access. Another method is transport-level security, which secures the traffic that crosses the network.
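The sketch below shows the authentication piece using a bearer token; the endpoint URL and token are placeholders assumed for illustration, and the HTTPS connection supplies the transport-level security.

    import requests  # pip install requests

    API_URL = "https://api.example.com/v1/reports"  # hypothetical endpoint
    API_TOKEN = "replace-with-a-real-token"         # hypothetical credential

    # The server authenticates the bearer token before authorizing access.
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())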
Site Resiliency
To ensure site resiliency, it’s important to have a backup site that can process transactions when the primary site is down or has failed. There are three types of recovery sites: hot, cold, and warm.
hot site—This site runs at all times (24/7). When the primary site fails or is under maintenance, the hot site takes over. This is the best choice for a company with high availability requirements.
cold site—A cold site requires only power and network connectivity. It may be a leased building or data center that can be used when needed. A cold site does not contain any of the needed hardware; it is merely an empty shell ready to be utilized. This is the least expensive option.
warm site—A warm site falls between cold and hot. The site has all the hardware and connections in place; the only thing needed is the data, which can be restored from a backup.
Deception and Disruption
Deception and disruption are tactics that can secure an enterprise environment by ensnaring attackers or disrupting an attack. Deception uses misleading environments to lure the attacker and capture information about the attacker or the attack. Disruption, as the name implies, aims to stop an attack before it can take hold.
honeypots—This is a decoy server that is designed to look like a gold mine to a threat actor. The server may have few to no security controls in place to allow threat actor access. Honeypots are used to trick the threat actor and keep them away from the corporate network while collecting information about the threat.
honeyfiles—This is a decoy file that is used to lure a threat actor. If the honeyfile is accessed or transmitted, it indicates a potential breach of the system.
honeynets—This is a decoy network that is isolated from the corporate network. Honeynets can simulate the corporate network and are composed of multiple honeypots to further lure the attacker into breaching the decoy network. Once the attacker is in the honeynet, data is collected that can shed light on the tools and methods used to breach the network.
fake telemetry—Fake telemetry is similar to honeyfiles in that it contains decoy or faked telemetry data that can be used to entice attackers while simultaneously capturing data on and about the attack.
Domain Name System (DNS) sinkhole—A DNS sinkhole is a DNS server configured to return false results for specific DNS queries. This routes the queries to harmless addresses while providing information on potential attack vectors or compromised systems (see the sketch after this list).
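A minimal sketch of sinkhole logic, assuming a hypothetical blocklist and internal logging address; a real deployment would configure this on the DNS server itself.

    SINKHOLE_ADDRESS = "10.0.0.99"  # assumed internal address that logs hits
    BLOCKLIST = {"malware-c2.example", "phishing.example"}

    def resolve(domain, real_lookup):
        """Return a sinkhole address for blocklisted domains, else the real answer."""
        if domain in BLOCKLIST:
            print(f"sinkholed query for {domain}")  # record the attempted contact
            return SINKHOLE_ADDRESS
        return real_lookup(domain)

    # Stub resolver standing in for a real upstream DNS lookup:
    upstream = lambda d: "203.0.113.7"
    print(resolve("malware-c2.example", upstream))  # 10.0.0.99
    print(resolve("intranet.example", upstream))    # 203.0.113.7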