Channel: Symantec Connect - Products - Articles

Securing CCS with SSL Certificates


Introduction

This article covers configuring SSL-encrypted communication between the CCS Application Server and the Production/Reporting database server, and enabling SSL on the CCS Web portal. The Symantec CCS Planning and Deployment Guide recommends configuring SSL communication between the Application Server and the standalone database servers that host the Production/Reporting DBs.

Database SSL configuration

SSL certificates

Right, you need an SSL certificate in order to secure CCS. If you already have one, skip to the next section. Here I'll show how to generate a self-signed certificate purely for demonstration purposes; it should be noted that self-signed certificates should NOT be used in production instances at any time.

Install OpenSSL Win32

To produce a self-signed certificate, we need software capable of generating one; OpenSSL Win32 is suitable. Download the OpenSSL Win32 library (the full installation, not the Light version) from the following link (install the latest version): http://slproweb.com/products/Win32OpenSSL.html

OpenSSL can be installed on any server; it does not need to be installed on the server where the certificate will be placed. The software is only needed to generate the CSR and sign the certificate.

Generate self-signed certificate

To generate a self-signed certificate, open a command prompt in C:\OpenSSL-Win32 and run a command similar to this:
C:\OpenSSL-Win32>openssl.exe req -new -newkey rsa:2048 -keyout hostkey.pem -nodes -out hostcsr.pem
1.png

Note that the most important field is the Common Name (e.g. the server FQDN). Ensure that it matches your SQL Server FQDN, otherwise it will not be accepted later in the SQL configuration.

Now we proceed by signing the CSR with the key we just generated, creating the self-signed certificate:

C:\OpenSSL-Win32>openssl x509 -req -in hostcsr.pem -signkey hostkey.pem -out srv01.crt

2.png

The last step is to export the certificate and the private key into a PKCS#12 keystore that will be imported into MSSQL:

3.png
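The export can be performed with a command like the following (a sketch using the file names from the earlier steps; you will be prompted for an export password, which you will need again when importing into Windows):

```shell
# Bundle the self-signed certificate and its private key into a PKCS#12 keystore
openssl pkcs12 -export -in srv01.crt -inkey hostkey.pem -out srv01.p12
```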

Importing certificate

After we have obtained the (self-signed) SSL certificate, the next step is to import it into the SQL Server certificate store.

To import certificate follow these steps:

  1. Start->Run-> type mmc
    4.png
  2. Click File->Add/Remove Snap-in, locate Certificates, and click Add >
    5.png
  3. Select Computer account
    6.png
  4. Then select Local Computer, click Finish, and then OK
    7.png
  5. If a certificate already exists, it will be under the Personal->Certificates location; otherwise the Personal folder will be empty.
    8.png
  6. To load new certificate, right-click on Personal, navigate to All Tasks and click Import…
    9.png
  7. In the Welcome to Certificate Import Wizard, click Next, then Browse to the .p12 certificate file. If you do not see the file, ensure you selected .p12 file extension in browse dialog:
    10.png
  8. Click Next once you select the certificate
    11.png
  9. If a password was configured during certificate creation, enter it in this window; if no password was configured, leave the field empty. Do not select “Mark this key as exportable…”. Click Next.
    12.png
  10. Leave default certificate location and click Next.
    13.png
  11. Click Finish on the Completing the Certificate Import Wizard page. If all went OK, you should see:
    14.png
  12. The new certificate will be visible in the console alongside any old ones. The next step is to configure SQL Server to use the (new) certificate.
    15.png
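As an alternative to the MMC wizard, steps 6-11 can be scripted with Windows' built-in certutil tool. This is a sketch assuming our example file name and a password of your choosing; run it from an elevated command prompt so the certificate lands in the Local Computer Personal store:

```shell
REM Import the PKCS#12 bundle into the Local Computer "Personal" certificate store
certutil -f -p "YourExportPassword" -importpfx "C:\certs\srv01.p12"
```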

Configuring SQL to use SSL certificate

To configure SQL to use SSL certificate, follow these steps:

  1. Start SQL Server Configuration Manager from All Programs->Microsoft SQL Server 2014->Configuration Tools
    16.png
  2. Under SQL Server Network Configuration, right click on Protocols for MSSQLSERVER and select Properties:
    17.png
  3. On the Certificates tab, select the appropriate certificate.
    18.png
    NOTE: if you set “Force Encryption” to Yes, the connection between the CCS Application Server and the CCS database will be encrypted regardless of whether you enable “Use SSL” in the CCS Console. This setting forces every client to use an encrypted connection, and any client that is not capable of encryption will fail to connect. If this SQL server hosts other databases with legacy clients that do not support encrypted connections, leave this setting as “No” and select “Use SSL” in the CCS Console instead. If the database server hosts only the CCS databases, you can set it to “Yes” and you are done.
  4. Click OK. A warning will be shown:
    19.png
  5. Restart the SQL Server service
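The restart in step 5 can be done from an elevated command prompt (assuming the default instance name MSSQLSERVER; named instances use MSSQL$InstanceName):

```shell
net stop MSSQLSERVER
net start MSSQLSERVER
```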

Configure CCS to use SSL certificate

After configuring SQL Server to use the SSL certificate, the last step is to enable "Use SSL" in the CCS Console configuration.

  1. Open CCS Console
  2. Under Settings -> Secure Configuration, select Production Database Connection
  3. Check the “Use SSL” check box
  4. Click on Update
  5. Perform steps 1-4 for Reporting Database Connection.
    20.png
    NOTE: if you receive the message “Failed to update the connection string”, ensure that your “client” has the root certificate of the SSL certificate in its Trusted Root Certification Authorities store. This is less likely to happen in production, where the certificate will be signed by a trusted CA, but in our self-signed certificate example I had to add the srv01 certificate to the trusted root store on both the CCS Application Server and the CCS database servers.
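To verify that the connection is actually encrypted, you can query SQL Server's own connection DMV over an encrypted session. A sketch, assuming the sqlcmd utility is installed and srv01.example.com stands in for your database server (-N requests an encrypted connection):

```shell
sqlcmd -S srv01.example.com -N -Q "SELECT encrypt_option FROM sys.dm_exec_connections WHERE session_id = @@SPID"
```

encrypt_option should read TRUE for an encrypted session.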

Webserver (IIS) SSL Configuration

The previous section covered securing the communication between the CCS Application Server and the database servers with an SSL certificate; in this one we'll discuss securing the CCS Web console with an SSL certificate.

As in the previous section, I'll use a self-signed certificate for demonstration purposes; it should not be used on production servers.

Install self-signed certificate for testing purpose

  1. Open Internet Information Service (IIS) Manager
  2. Navigate to Server Certificates under Web server name node
    21.png
  3. On the Actions menu on the right side, click the “Create Self-Signed Certificate” link
    22.png
  4. Specify a friendly name, e.g. “CCS Web (test)”, and click OK
  5. Certificate will appear under the Server Certificates
    23.png
  6. Navigate to Default Web Site and on the right menu click the Bindings... link
    24.png
  7. If there is no “https” type, click Add and select following:
    • Type: https
    • IP Address: leave default “All Unassigned”
    • Port: leave default “443”
    • SSL Certificate: select the appropriate certificate
    • Click OK
      25.png
  8. Click Close on Site Bindings window
    26.png
  9. Test the connection via a browser. If you used a self-signed certificate, you will receive a warning that there is an issue with the certificate and that the connection is not secure; this is normal, since self-signed certificates are not trusted by web browsers.
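The binding can also be checked from a command line on the server itself; -k tells curl to accept the untrusted self-signed certificate:

```shell
# Fetch only the response headers over HTTPS; -k skips certificate validation
curl -k -I https://localhost/
# Or inspect the certificate the server presents
openssl s_client -connect localhost:443 -servername localhost </dev/null
```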

I hope this article was useful and that you have solved your issue or learned a thing or two. If you find something that should be updated, corrected, added to, or removed from this article, feel free to contact me.

Best Regards,

Vladx


Install DLP Agent on macOS


You can install one DLP Agent to a Mac endpoint at a time, or you can use system management software (SMS) to install many DLP Agents automatically. Symantec recommends that you install one DLP Agent using the manual method before you install many DLP Agents using your SMS. Installing in this manner helps you troubleshoot potential issues and ensure that installing using your SMS goes smoothly.

Here are the illustrated steps to install the DLP Agent on macOS.

1. Log into DLP Enforce console, select 'System' --> 'Agents' --> 'Agent Packaging':

macdlpagent-001.png

2. Browse to and select the DLP macOS agent installation package, enter the Endpoint Server IP address or hostname, enter the tools password, then click 'Generate Installer Packages':

macdlpagent-002.png

3. Save the DLP MAC agent installer:

macdlpagent-003.png

4. Copy the installer to the macOS machine and unzip it. There are 8 files inside the package, including the certificate files:

macdlpagent-004.png

5. Launch Terminal, change into the unzipped agent package directory, and run the following command to create the DLP macOS agent installation package:

./create_package

macdlpagent-005.png

6. Confirm the new package was created successfully:

macdlpagent-006.png

7. The new package will be named AgentInstall_WithCertificates.pkg by default:

macdlpagent-007.png

8. Run the following command to install the DLP MAC agent:

sudo installer -pkg AgentInstall_WithCertificates.pkg -target /

macdlpagent-008.png

9. You will be prompted to enter the OS account password:

macdlpagent-009.png

10. Confirm the installer ran successfully:

macdlpagent-010.png

11. Log into DLP Enforce console, check the status of the DLP Agent on MAC OS:

macdlpagent-011.png

12. You can also check the DLP Agent processes from Activity Monitor:

macdlpagent-012.png

macdlpagent-013.png
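The same check can be made from Terminal. A sketch; the process name edpa and the package-identifier filter are assumptions, so adjust the grep patterns to what your installation actually shows:

```shell
# Look for the DLP agent process (name assumed to be "edpa")
ps aux | grep -i edpa | grep -v grep
# List installer receipts that look like the agent package
pkgutil --pkgs | grep -i agent
```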

13. Configure a policy and response rule on DLP Enforce, then copy a confidential file to a USB disk; a notification will appear, just as with the Windows agent:

macdlpagent-014.png

Cloud Burst in a Rainshadow Region (Symantec + Bluecoat makes it possible)


There are many rain shadow regions across the world, aren't there? In Asia, the Himalaya mountain range acts as the catalyst for the rain shadow effect over the Tibetan Plateau, Central Asia, and the Gobi Desert. The Japanese Alps create the same phenomenon over the Kanto Plain in the Tokyo region. The Arakan Mountains have that effect over the central regions of Myanmar. In the Middle East, the Judean Hills have the same effect on the Dead Sea and the Judean Desert. The Zagros and Elburz Mountains in Iran create the same effect over the Dasht-e Lut region. In South America, the rain shadow effect experienced by the Atacama Desert, the Mendoza region, and Chile's Valle Central is due to the presence of the Andes Mountains.

The role of 'tall mountains casting a shadow' here in Information Technology (IT) is being played by hosting solution/cloud vendors. A tremendous shadow is cast over locally managed infrastructures. The pressure of being unable to match the lower costs offered by cloud vendors is resulting in every environment embracing the cloud more than ever before. If a competitor reduces operating costs by 30% by decommissioning all local hardware and moving to the cloud, they can now sell their product at a discounted price. Competitors who stay rigid in their approach, refusing the change and declining to outsource technology, will either lose market share or run with reduced or no profit margins. This is exactly what I call a rain shadow situation, introduced not by the Himalayas, the Alps, or the Elburz, but by the cloud providers: Amazon, Google, HP, and IBM.

Eventually this dense shadow has forced (or will force) even some of the busiest e-commerce businesses to adopt the cloud. Cloud-based service providers offer an easy solution to the most critical issues of business continuity and disaster recovery, which is an added advantage. Does this mean it will now rain in a rain-hungry (rain shadow) belt after a move to IaaS cloud? The flood of connections coming from the WAN now stays on the WAN itself, or perhaps moves to your provider's network. So, continuing our analogy, we accept that the cloud stays within the cloud: the water flows within the cloud and ultimately does not leave it. It may be piped from one cloud to another, but it stays right at the top. There is nothing wrong with such a system per se; the major issue is sharing the data for inspection by various technologies: traffic vulnerability inspection, IDS, IPS, and most importantly Data Loss Prevention for egress traffic. Is all of this under control now? Both no rain and cloud-bursting (flooding) rain are bad.

The point I'm trying to make is that we all need regulated rain, regulated via switches and regulators. There has been a remarkable vacuum in this space for quite some time, but with the union of Symantec + Blue Coat + Elastica we are finally in the most ideal space: neither living in a rain shadow nor getting flooded. The compliance picture is improving with CASB and manual pointers now.

Cloud adoption has created new security and compliance issues. Enterprises are struggling to understand the data security and compliance impact of aggressive employee and organizational adoption of cloud applications while also trying to determine how to maintain data security and compliance with new data residency laws as their infrastructure moves to the cloud. This is where a Cloud Access Security Broker (CASB) comes into play. Startup CASB vendors can provide visibility into cloud application risk - largely based on proxy logs; those vendors, however, lack any control point for web and cloud traffic to implement policy control to mitigate the risk of shadow cloud. Moreover, they lack advanced threat protection that can protect from threats that may come from cloud application usage. Lastly, as these vendors require integration to an existing proxy to function, it makes sense that Symantec is a natural fit to perform these services natively instead.

Symantec CASB Solution Components:

  • Cloud Application Visibility and Risk Intelligence (“Audit”) allows organizations to discover and analyze cloud application usage within their organization for both sanctioned and non-sanctioned application usage. The Audit product delivers an understanding of who is using which applications, how much data is moving in and out, and where the risk lies across cloud application usage.
  •  Cloud Application Threat Protection and Data Controls (“CASB Gateway”) provides the ability to deliver in-line granular control over user interactions with cloud applications by recognizing usage and applying policies to maintain data security. The CASB Gateway offers data loss prevention, user behavior analytics, and file encryption capabilities to mitigate the risks introduced with cloud application usage.
  •  Cloud Application Data Control and Threat Protection (“Securlets”) protects cloud accounts, controls user activity and governs data within cloud accounts through direct API integration with cloud applications. Securlets also enable incident response and forensics to monitor, log and capture activities that occur within cloud applications.
  •  Cloud Data Protection allows you to encrypt or tokenize cloud data to ensure compliance with data residency laws and other compliance regimes. It works with public cloud SaaS applications like ServiceNow, Salesforce, and Oracle. CDP intercepts sensitive data while it is still on-premises and replaces it with a random tokenized or encrypted value, rendering it meaningless should anyone outside of the company access the data while it is being processed or stored in the cloud.

Encrypted Traffic is not the only blind spot, OCR is the new kid on the block (DLP 14.5 Form Matching Technology)


Encrypted traffic is not the only blind spot; OCR is the new kid on the block, especially where egress traffic is concerned. I would say not just OCR: several new blind spots have emerged in recent years. Moreover, these blind spots are not due to a lacuna or some new threat vector; they are solely due to the emergence of technology without adequate supplementary controls in place that can act as a plug for enforcement (both public and private/company-owned). Remember the Communications Assistance for Law Enforcement Act (CALEA), a United States wiretapping law passed in 1994 during the presidency of Bill Clinton (Pub. L. No. 103-414, 108 Stat. 4279, codified at 47 USC 1001-1010)? CALEA's purpose is to enhance the ability of law enforcement agencies to conduct electronic surveillance by requiring that telecommunications carriers and manufacturers of telecommunications equipment modify and design their equipment, facilities, and services to ensure that they have built-in surveillance capabilities, allowing federal agencies to wiretap traffic. From an e-discovery perspective, even organizations are required to comply with it. Well, that is enough elaboration of the problem statement, I suppose. Now let's talk about some of the possible solutions available from a technology perspective.

Forward Trust, or in simpler terms SSL proxying, is one such great technology that puts us at a lot of ease, especially in managing egress (internal-to-external) traffic. In other words, we are setting up a proxy within a proxy, or a proxy with the advanced capability of acting as a legitimate man-in-the-middle (MITM); it could also be referred to as an escrow for SSL. A few technologists also document this feature simply as an SSL gateway. Whatever name is finally coined, it is a great leap on its own. The channel of encrypting malicious traffic has long been the best safe house for both internal and external threats, which had almost made attackers habitual users of this type of vulnerability, especially for data theft. Now, with the emergence of players like Palo Alto and Blue Coat, this SSL safe-house issue is slowly being mitigated. Hence attackers are exploring different avenues that work in a similar fashion (providing them a safe house or bunker just while they pass the checkpoints, i.e. the perimeter security devices). A common example: encrypting a file full of PCI data and sending it to an external unauthorized party so that the DLP SMTP scanner cannot detect it.

Absolutely! I'm talking about OCR technology. SSL creates a blind/shadow zone because it encrypts traffic and makes it unreadable to the devices at the egress gateway/perimeter; this applies to ingress traffic too, where IDS/IPS capabilities are limited if the traffic is not plain text. Rendering content as images works in a similar way. It is not encryption, of course, but it allows the attacker to freeze code, files, or IP data so that they stand absolutely still, as if a statue, while passing the gateway and the whole array of monitoring devices; afterwards the content is transformed back into an executable or specialized application file and its payload executes.

Usually the solution is simple and the same: the plain-text content is made available for analysis. For OCR this means that we either:

  • Run OCR to convert the image content to plain text on a copy of the traffic (either a physical tap or port mirroring) and detect, or
  • Run OCR to convert the image content to plain text inline; if a problem is found, block or take action, otherwise pass the traffic on (retaining the original copy, etc.)

Some commentary on the DLP solution architecture: Symantec DLP 14.5's Form Matching technology is a winner here! It solves a good number of OCR-like use cases. Even better, Symantec DLP 14.5's OCR capability can natively integrate with a number of OCR technologies such as ABBYY without issues. At the moment, these solutions are easily integrated with the help of SDKs and integration tools.

Configure SMS to download Virus Definitions from internal LUA


The organization's Exchange servers are isolated from the internet, so the SMS instances installed on these Exchange servers cannot retrieve new antivirus definitions via Symantec LiveUpdate. How do you ensure the definitions stay updated without internet access?

You can configure SMS to download virus definitions from an internal LiveUpdate Administrator (LUA) server.

Here are the detailed steps:

1. Log into LUA, select 'Configure' --> 'My Symantec Products', click 'Add New Products':

udpateSMSbyLUA-001.png

2. From the products list, click to select 'Symantec Mail Security for Microsoft Exchange':

udpateSMSbyLUA-002_0.png

3. Select the product version; the SMSMSE 7.0 entry is used for both 7.0 and 7.5:

udpateSMSbyLUA-003.png

4. Select 'Configure' --> 'Distribution Centers' --> 'Default Production Distribution Center', click 'Edit' button:

udpateSMSbyLUA-005.png

5. Under the 'Product List', click 'Add' button:

udpateSMSbyLUA-006.png

6. Select SMS, click 'OK' button to add it into the product list of the distribution center:

udpateSMSbyLUA-007.png

7. Select 'Download & Distribute' --> 'Schedule', click 'Add Distribution' button:

udpateSMSbyLUA-009.png

8. Name this distribution schedule, then click 'Add' button:

udpateSMSbyLUA-010.png

9. Select SMS, click 'Add' button to add it into the distribute schedule:

udpateSMSbyLUA-011.png

10. Set the distribution schedule to 'After Download Schedule', then click 'OK' to save it:

udpateSMSbyLUA-012.png

11. After saving the distribution schedule, click 'Add Download':

udpateSMSbyLUA-013.png

12. Select SMS, click 'Add' button to add it into the download schedule:

udpateSMSbyLUA-014.png

13. Set the download time, select 'Run selected Distribution Schedules automatically after this download completes', then add the 'Default Distribution' to the distribution schedule and click 'OK' to save the download schedule:

udpateSMSbyLUA-015.png

14. After saving the schedule, you can run the download and distribution manually:

udpateSMSbyLUA-016.png

15. Confirm the download and distribution ran successfully:

udpateSMSbyLUA-017.png

16. Select 'Configure' --> 'Client Settings', click to select 'Default client settings from production environment', then click 'Export Windows Settings':

udpateSMSbyLUA-018.png

17. Save the LUA settings file:

udpateSMSbyLUA-019.png

18. This settings file contains the parameters of the LUA server, like this:

udpateSMSbyLUA-020.png

19. Copy this file into the folder of SMS LiveUpdate:

C:\ProgramData\Symantec\LiveUpdate

The default LiveUpdate configuration file of SMS is named Settings.LiveUpdate.

You must rename the copied LiveUpdate settings file to Settings.LiveUpdate:

udpateSMSbyLUA-021.png
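The copy and rename in step 19 can be done from a command prompt. A sketch; MyExportedSettings.LiveUpdate stands in for whatever file name you saved in step 17:

```shell
copy "C:\temp\MyExportedSettings.LiveUpdate" "C:\ProgramData\Symantec\LiveUpdate\"
ren "C:\ProgramData\Symantec\LiveUpdate\MyExportedSettings.LiveUpdate" "Settings.LiveUpdate"
```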

20. Log into the SMS management console, select 'Admin' --> 'LiveUpdate/Rapid Release Status', click 'Run LiveUpdate Certified Definitions':

udpateSMSbyLUA-022.png

21. Confirm the command executed successfully:

udpateSMSbyLUA-023.png

22. The LiveUpdate status will show as running:

udpateSMSbyLUA-024.png

23. Wait several minutes, then confirm the LiveUpdate status from the SMS management console:

udpateSMSbyLUA-025.png

Configure Active Directory Policy in SEE


Symantec Endpoint Encryption provides the following types of policies that you create from the Management Console: 

■ Install-time policies
■ Active Directory policies
■ Native policies

About install-time policies

Install-time policies are the default policies set when you create the Management Agent, Drive Encryption, and Removable Media Encryption client installers through the Management Console. You can modify an install-time policy by deploying the updated policy options that you defined using Active Directory or native policies. Active Directory and native policy settings take precedence over any installation settings on the client.

ADPolicySEE-10_0.png

About native policies

Native policies are designed for deployment to computers that Active Directory does not manage. If you want to deploy native policies to computers that Active Directory manages, turn off the synchronization with Active Directory.

ADPolicySEE-11.png

About Active Directory policies

Active Directory policies are known as Group Policy Objects (GPOs). They are designed for deployment to the computers that reside within your Active Directory forest or domain. You can create and deploy Active Directory policies whether synchronization with Active Directory is enabled or disabled.

Here are the steps to configure Active Directory policy in SEE:

1. Log into SEE Manager, right click 'Group Policy Management', select 'Add Forest':

ADPolicySEE-01.png

2. Input the Active Directory Domain name, click OK:

ADPolicySEE-02.png

3. SEE will fetch the structure of the Domain: 

ADPolicySEE-03.png

4. Right-click the OU where you want to create and assign the SEE policy, select 'Create a GPO in this domain, and Link it here':

ADPolicySEE-04.png

5. Input the name of the GPO, click OK:

ADPolicySEE-05.png

6. Right click the newly created GPO, select 'Edit':

ADPolicySEE-06.png

7. Expand the 'Computer Configuration' --> 'Policies' --> 'Software Settings' --> 'Symantec Endpoint Encryption', you will find out all the SEE policy configurations:

ADPolicySEE-07.png

8. Modify and save these configurations; they will be applied automatically to the computers residing within the Active Directory OU.
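Domain computers pick up the updated GPO at the next Group Policy refresh interval; to apply it immediately on a client, run:

```shell
gpupdate /force
```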

Step By Step: How to Create an Email Alert When the Detection Server Monitor Service Is Stopped


How to create an email alert when the Detection Server Monitor service is stopped

Symantec DLP 14.5

Note: Please see the attachment for the full article with screenshots

Steps

  1. Open the Enforce Server console from a browser, then click System > Servers and Detectors > Alerts

  2. Click on Add Alert

  3. Enter the Alert name in Alert name field

  4. Enter the Description in Description field

  5. Click on Add Condition

  6. Select Event Code in the Condition drop-down menu

  7. Select Is Any Of for condition to apply

  8. Enter the event code for a Monitor service stop (you can find these event codes under System > Servers and Detectors > Events, which shows the event code each time a specific event is recorded)

  9. For a specific Detection Server Alert Click on Add Condition

  10. Select Server and Detector and Is Any Of in the conditions, then select the specific server you want to generate an email alert for (if you do not select a specific server, an alert is generated if any detection server's Monitor service stops)

  11. Enter the email addresses under Actions > Send Email Notification in the Recipient(s) field (you can enter more than one email address by separating them with commas)

  12. Enter the maximum number of email notifications to send per hour in the Max Per Hour field

  13. Click on Save to save the Alert

    Test the alert to confirm it is working

SNAC LAN Enforcement - Do not see the 'Authentication' tab to enable IEEE 802.1x Authentication on a Windows 7 Client


If you do not see the 'Authentication' tab to enable IEEE 802.1x authentication on a Windows 7 client, there is not even the slightest reason to worry or ponder.

802.1x.png

This is because Windows 7 (enterprise builds, of course) by default fully supports IEEE 802.1x authentication for both wired and wireless access. The better part is that you can turn on IEEE 802.1x authentication for either one (wired or wireless) or for both, as per your requirement.

By default this is disabled on Windows 7 systems, and it requires us to enable the services below, either manually or through a group policy or systems management tool in your environment.

Please see below for detailed steps:

 

For Wired - Enable Wired AutoConfig on your Computer

  1. Click the Start button and type services.msc into the search box.
  2. In the Services window, locate the service named Wired AutoConfig.
  3. Right-click this service and click Properties.
  4. Set the Startup Type to Automatic and press OK.
  5. Reboot your computer for the changes to take effect.

 

For Wireless - Enable WLAN AutoConfig on your Computer

  1. Click the Start button and type services.msc into the search box.
  2. In the Services window, locate the service named WLAN AutoConfig.
  3. Right-click this service and click Properties.
  4. Set the Startup Type to Automatic and press OK.
  5. Reboot your computer for the changes to take effect.
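Both services can also be enabled from an elevated command prompt instead of the Services console (dot3svc is Wired AutoConfig and wlansvc is WLAN AutoConfig; note the required space after start=):

```shell
sc config dot3svc start= auto
net start dot3svc
sc config wlansvc start= auto
net start wlansvc
```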

 

NOW, after enabling Wired/WLAN AutoConfig, the 'Authentication' tab will appear in the network adapter's properties, where IEEE 802.1x authentication can be configured.


SNAC LAN Enforcement: Prerequisites for Configuring IEEE 802.1X Port-Based Authentication in NON-TRANSPARENT MODE



Cisco mandated tasks

The following Cisco mandated tasks must be completed before implementing the IEEE 802.1X Port-Based Authentication feature:

  • IEEE 802.1X must be enabled on the device port.
  • The device must have a RADIUS configuration and be connected to the Cisco secure access control server (ACS). You should understand the concepts of the RADIUS protocol and have an understanding of how to create and apply access control lists (ACLs).
  • EAP support must be enabled on the RADIUS server.
  • You must configure the IEEE 802.1X supplicant to send an EAP-logoff (Stop) message to the switch when the user logs off. If you do not configure the IEEE 802.1X supplicant, an EAP-logoff message is not sent to the switch and the accompanying accounting Stop message is not sent to the authentication server. See the Microsoft Knowledge Base article at http://support.microsoft.com and set the SupplicantMode registry value to 3 and the AuthMode registry value to 1.
  • Authentication, authorization, and accounting (AAA) must be configured on the port for all network-related service requests. The authentication method list must be enabled and specified. A method list describes the sequence and authentication method to be queried to authenticate a user. See the IEEE 802.1X Authenticator feature module for information.
  • The port must be successfully authenticated.
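A minimal IOS configuration covering the RADIUS/AAA and 802.1X items above might look like the following. This is a sketch with placeholder addresses and keys; exact command syntax varies between IOS releases, so verify against your platform's documentation:

```
! Enable AAA and point 802.1X authentication at the RADIUS server
aaa new-model
aaa authentication dot1x default group radius
radius-server host 192.0.2.10 auth-port 1812 acct-port 1813 key MySharedSecret
! Globally enable 802.1X
dot1x system-auth-control
!
interface FastEthernet0/1
 switchport mode access
 authentication port-control auto
 dot1x pae authenticator
```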

The IEEE 802.1X Port-Based Authentication feature is available only on Cisco 89x and 88x series Integrated Services Routers (ISRs) that support switch ports.

Note: Optimal performance is obtained with a connection that has a maximum of eight hosts per port.

The following Cisco ISR-G2 routers are supported:

  • 1900
  • 2900
  • 3900
  • 3900e

The following cards or modules support switch ports:

  • Enhanced High-speed WAN interface cards (EHWICs) with ACL support:
    • EHWIC-4ESG-P
    • EHWIC-9ESG-P
    • EHWIC-4ESG
    • EHWIC-9ESG
  • High-speed WAN interface cards (HWICs) without ACL support:
    • HWIC-4ESW-P
    • HWIC-9ESW-P
    • HWIC-4ESW
    • HWIC-9ES

Note: for module compatibility with a specific router platform, see the Cisco EtherSwitch Modules Comparison:
http://www.cisco.com/en/US/products/ps5854/products_qanda_item0900aecd802a9470.shtml

To determine whether your router has switch ports that can be configured with the IEEE 802.1X Port-Based Authentication feature, use the show interfaces switchport command.

Restrictions for IEEE 802.1X Port-Based Authentication

IEEE 802.1X Port-Based Authentication Configuration Restrictions

  • The IEEE 802.1X Port-Based Authentication feature is available only on a switch port
  • If the VLAN to which an IEEE 802.1X port is assigned is shut down, disabled, or removed, the port becomes unauthorized. For example, the port is unauthorized after the access VLAN to which a port is assigned shuts down or is removed.
  • When IEEE 802.1X authentication is enabled, ports are authenticated before any other Layer 2 or Layer 3 features are enabled.
  • Changes to a VLAN to which an IEEE 802.1X-enabled port is assigned are transparent and do not affect the switch port. For example, a change occurs if a port is assigned to a RADIUS server-assigned VLAN and is then assigned to a different VLAN after reauthentication.
  • When IEEE 802.1X authentication is enabled on a port, you cannot configure a port VLAN that is equal to a voice VLAN.
  • This feature does not support standard ACLs on the switch port.
  • The IEEE 802.1X protocol is supported only on Layer 2 static-access ports, Layer 2 static-trunk ports, voice VLAN-enabled ports, and Layer 3 routed ports.
  • The IEEE 802.1X protocol is not supported on the following port types:
    • Dynamic-access ports—If you try to enable IEEE 802.1X authentication on a dynamic-access (VLAN Query Protocol [VQP]) port, an error message appears, and IEEE 802.1X authentication is not enabled. If you try to change an IEEE 802.1X-enabled port to dynamic VLAN assignment, an error message appears, and the VLAN configuration is not changed.
    • Dynamic ports—If you try to enable IEEE 802.1X authentication on a dynamic port, an error message appears, and IEEE 802.1X authentication is not enabled. If you try to change the mode of an IEEE 802.1X-enabled port to dynamic, an error message appears, and the port mode is not changed.
    • Switched Port Analyzer (SPAN) and Remote SPAN (RSPAN) destination ports—You can enable IEEE 802.1X authentication on a port that is a SPAN or RSPAN destination port. However, IEEE 802.1X authentication is disabled until the port is removed as a SPAN or RSPAN destination port. You can enable IEEE 802.1X authentication on a SPAN or RSPAN source port.
  • Configuring the same VLAN ID for both access and voice traffic (using the switchport access vlan vlan-id and the switchport voice vlan vlan-id commands) fails if authentication has already been configured on the port.
  • Configuring authentication on a port on which you have already configured switchport access vlan vlan-id and switchport voice vlan vlan-id fails if the access VLAN and voice VLAN have been configured with the same VLAN ID.
  • By default, authentication system messages, MAC authentication bypass system messages, and 802.1x system messages are not displayed. If you need to see these system messages, turn on logging manually using the following commands:
    • authentication logging verbose
    • dot1x logging verbose
    • mab logging verbose
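In Cisco IOS these are global configuration commands, so enabling them looks roughly like the sketch below (prompts and command availability vary by platform and IOS version):

```
Switch# configure terminal
Switch(config)# authentication logging verbose
Switch(config)# dot1x logging verbose
Switch(config)# mab logging verbose
Switch(config)# end
```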

For more/specific details pertaining to Cisco pre-requisites (to be able to integrate SNAC with Cisco devices), please refer to the link below:

http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/sec_usr_8021x/configuration/xe-3se/3850/sec-user-8021x-xe-3se-3850-book/config-ieee-802x-pba.html#GUID-B1C1F75B-45CF-4CA3-A833-43D7C6986249

SNAC Gateway/LAN Enforcement: Failed to receive an authentication reply from the RADIUS server (Reversible Password Encryption Disabled)

Before proceeding further with the discussion of this issue, let's all agree that it is not limited to the Symantec NAC solution; at no point should the requirement to enable reversible password encryption in AD be perceived as a Symantec-specific requirement. Be it Nevis, Napera, Aruba, Bradford, Cisco, Juniper or Forescout, a RADIUS implementation needs MS-CHAP v2 support to keep using encrypted passwords: the stored credential must be an MS-CHAP hash for the server to compare against. Failing that, Windows must store the passwords with reversible encryption, because the server cannot verify that the right password was supplied if it cannot obtain it in its MS-CHAP native format.

The Store password using reversible encryption policy setting provides support for applications that use protocols that require the user's password for authentication. Storing encrypted passwords in a way that is reversible means that the encrypted passwords can be decrypted. A knowledgeable attacker who is able to break this encryption can then log on to network resources by using the compromised account. For this reason, never enable Store password using reversible encryption for all users in the domain unless application requirements outweigh the need to protect password information.

If you use the Challenge Handshake Authentication Protocol (CHAP) through remote access or Internet Authentication Services (IAS), you must enable this policy setting. CHAP is an authentication protocol that is used by remote access and network connections. Digest Authentication in Internet Information Services (IIS) also requires that you enable this policy setting.

Fulfilling this requirement stops the Enforcer's user.log from recording failed attempts to receive an authentication reply from the RADIUS server. As a result, RADIUS packets no longer time out when the Enforcer forwards the authentication request from the authenticator.

You can enable additional secure channel events by changing the following registry value from 1 (REG_DWORD type, data 0x00000001) to 3 (REG_DWORD type, data 0x00000003) to verify that the issue is completely resolved after making the required changes:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\EventLogging
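On a test machine, the change can be made from an elevated command prompt with reg.exe. This is a sketch (it assumes EventLogging is a REG_DWORD value under the SCHANNEL key, as the path above suggests); verify it against your registry before applying in production:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" /v EventLogging /t REG_DWORD /d 3 /f
```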

This issue is mainly seen with:

  1. A network switch with 802.1x enabled, in the Authenticator role.
  2. A Symantec Network Access Control (SNAC) Enforcer, which checks the endpoint security and compliance posture.
  3. A Remote Authentication Dial-In User Service (RADIUS) / Network Access Protection (NAP) server, which checks the customer's directory server for user or computer authentication.

DLP Hot Backups failing after upgrade to 14.x (Oracle to 11.2.0.4)


DLP Hot Backups failing after upgrade to 14.x (Oracle to 11.2.0.4) in tools like NetBackup, Backup Exec, Commvault, etc.

The below error is seen:

Failure Reason: ERROR CODE [82:127]:
Network send failed: Software caused connection abort Source: DLPSERVERNAME,
Process: clBackup

In certain cases, the following error is seen instead:

ERROR CODE [19:1335]:
Oracle Backup [CVImpersonateLoggedOnUser() failed for oraUser=[protect]
ntDomain=[DLPSERVER] m_hToken=2bc.]
Source: DLPSERVER, Process: ClOraAgent

On further investigation, it is found that the Oracle Home (in the backup tool) is still pointing to the old Oracle Home location. (This happens because, as recommended for DLP, we install a new Oracle instance/version and re-point the variables.) Hence, the Oracle Home needs to be changed in all tools (backups, monitoring, Tripwire, etc.)

Ora-Capture_2.PNG

Edit the Oracle Path to the New Path = Drive:\Oracle\Product\11.2.0.4\db_1

This should help in resolving both the above listed errors.

Additional Notes on this Topic:

What is Oracle Home w.r.t DLP?

The Oracle base location is the location where Oracle Database binaries are stored. During installation, you are prompted for the Oracle base path. Typically, an Oracle base path for the database is created during Oracle Grid Infrastructure installation.

To prepare for installation, Oracle recommends that you only set the ORACLE_BASE environment variable to define paths for Oracle binaries and configuration files. Oracle Universal Installer (OUI) creates other necessary paths and environment variables in accordance with the Optimal Flexible Architecture (OFA) rules for well-structured Oracle software environments.

For example, with Oracle Database 11g, Oracle recommends that you do not set an Oracle home environment variable; allow OUI to create it instead. If the Oracle base path is /u01/app/oracle, then by default OUI creates the Oracle home path beneath it.

What are Offline (Cold) Backups w.r.t. DLP

An offline (cold) backup is a physical backup of the database after it has been shut down using the SHUTDOWN NORMAL command. If the database is shut down with the IMMEDIATE or ABORT option, it should be restarted in RESTRICT mode and then shut down with the NORMAL option. An operating system utility is used to perform the backup; for example, on Unix you could use cpio, tar, dd, fbackup or a third-party utility. For a complete cold backup, the following files must be backed up:

  • All datafiles
  • All control files
  • All online redo log files (optional)
  • The init.ora file (can be recreated manually)

The location of all database files can be found in the data dictionary views, DBA_DATA_FILES, V$DATAFILE, V$LOGFILE and V$CONTROLFILE. These views can be queried even when the database is mounted and not open.
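For example, the file lists can be pulled with simple queries against those views from SQL*Plus while the database is at least mounted (these are standard Oracle dictionary views; adjust the output handling to your own scripts):

```sql
-- Files that must be captured in a complete cold backup
SELECT name   FROM v$datafile;     -- all datafiles
SELECT member FROM v$logfile;      -- online redo log members
SELECT name   FROM v$controlfile;  -- control files
```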

A cold backup of the database is an image copy of the database at a point in time. The database is consistent and restorable. This image copy can be used to move the database to another computer provided the same operating system is being used. If the database is in ARCHIVELOG mode, the cold backup would be the starting point for a point-in-time recovery. All archive logfiles necessary would be applied to the database once it is restored from the cold backup. Cold backups are useful if your business requirements allow for a shut-down window to backup the database. If your database is very large or you have 24x7 processing, cold backups are not an option, and you must use online (hot) backups.

What are Online (Hot) Backups w.r.t DLP

When databases must remain operational 24 hours a day, 7 days a week, or have become so large that a cold backup would take too long, Oracle provides for online (hot) backups to be made while the database is open and being used. To perform a hot backup, the database must be in ARCHIVELOG mode. Unlike a cold backup, in which the whole database is usually backed up at the same time, tablespaces in a hot backup scenario can be backed up on different schedules. The other major difference between hot and cold backups is that before a tablespace can be backed up, the database must be informed when a backup is starting and when it is complete. This is done by executing two commands:

Alter tablespace tablespace_name begin backup;
Perform Operating System Backup of tablespace_name datafiles
Alter tablespace tablespace_name end backup;

At the conclusion of a hot backup, the redo logs should be forced to switch, and all archived redo log files and the control file should also be backed up, in addition to the datafiles. The control file cannot be backed up with a backup utility; it must be backed up with the following Oracle command in Server Manager:

Alter database backup controlfile to 'file_name';
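Putting the steps together, a minimal hot-backup sequence for a single tablespace might look like the sketch below (the tablespace and backup file names are placeholders, not values from a DLP install):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the USERS datafiles with an OS utility (tar, cpio, copy) ...
ALTER TABLESPACE users END BACKUP;
ALTER SYSTEM SWITCH LOGFILE;          -- force archiving of the current redo log
ALTER DATABASE BACKUP CONTROLFILE TO '/backup/ctl.bkp';
```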

Symantec Endpoint Protection v14.01 (MP1) has been released!


A new year. A new SEP v14 release! :D

Looks like Symantec has been busy squashing the bugs from the first release of SEP v14, and the list of bugs resolved is impressive (link below).

I often hold off on the first major release (testing it only) until the next minor release, and this is now my opportunity to upgrade our production network. Anyone with a similar testing experience?

Here are the documents for some light readings:

Symantec™ Endpoint Protection v14 MP1 Release Notes - https://support.symantec.com/en_US/article.DOC9698.html

Supported upgrade paths to Symantec™ Endpoint Protection v14 MP1

https://support.symantec.com/en_US/article.HOWTO81070.html

Symantec™ Endpoint Protection v14 Installation and Administration Guide - https://support.symantec.com/en_US/article.DOC9449.html

Upgrade best practices for Endpoint Protection v14 - https://support.symantec.com/en_US/article.HOWTO125386.html

Symantec™ Endpoint Protection Quick Start Guide - https://support.symantec.com/en_US/article.DOC8227.html

What's new in v14 - https://support.symantec.com/en_US/article.HOWTO124730.html

New fixes and component versions in Symantec™ Endpoint Protection v14 MP1 - https://support.symantec.com/en_US/article.INFO4193.html

Database schema reference for Endpoint Protection 14 - http://www.symantec.com/docs/DOC9438

So where can you grab the latest version from? You can download it from the usual place, https://symantec.flexnetoperations.com, using your serial number (beginning with Mxxxxxxxx). Please note: you cannot use your existing v12.1 serial number to access this; you will need a new serial number, which was sent out to all existing v12.1 users in the upgrade notification e-mail. If you have not received this e-mail, please contact Symantec Licensing Support to get the new serial number.

As mentioned earlier, I'm starting to plan the migration to this version very soon. What about you? Are you going to upgrade straight away, or do you need to plan first? Our setup has an SVA (Security Virtual Appliance), which is no longer supported, so that's something to be aware of.

Share your upgrade experience!

Signs and Symptoms that your DLP Enforce is overloaded


There are several questions people ask when, as a Consultant/Architect, you visit them to provide services. There are environments which are adequately staffed in terms of hardware, and there are those which are not; sizing concerns occur in both, and the local administration teams need a way out in both cases. Of course, the answer is simple if you do not have enough hardware resources: buy more and better hardware, which would be the key to a happy life, all in all.

However, this article is specifically for environments which already have the required hardware in place. In other words, the RAM, the CPU cores, the hard disk space and everything else pretty much fall in line, and yet there are performance concerns. Of course, there is another well-known category here, which is misconfiguration. Yes, I'm talking about environments that inspect GET traffic instead of just POST, and places that have policies implemented to inspect all traffic, irrespective of content, going to certain destinations. Not to forget the inappropriate/excessive use of wildcards. However, this article is not meant to cover those types either.

Here's what I wish to cover in this article: enough hardware is physically available, but the DLP Enforce application/services are not configured to utilize it effectively.

Let's start with the signs and symptoms of this category as a whole:

(a) Takes a long time for Enforce to load

(b) Report generation timing out or taking long

(c) Certain operations/edits timing out or taking long

(d) RSOD (Red screen of death) while performing certain operations/configuration changes

(e) Below log entries in the VontuManager.log (under debug)

•INFO   | jvm 1    | 2017/01/010 07:07:27 | Exception in thread "HeartbeatCheckerTimer" com.vontu.model.DatabaseConnectionException: org.apache.ojb.broker.PersistenceBrokerException: Used ConnectionManager instance could not obtain a connection

•INFO   | jvm 1    | 2017/01/05 06:13:42 | line 1:71: unexpected token: null

•INFO   | jvm 1    | 2017/01/05 06:57:17 | line 1:71: unexpected token: null

•INFO   | jvm 1    | 2017/01/05 06:57:23 | line 1:71: unexpected token: null

•INFO   | jvm 1    | 2017/01/05 07:09:27 | Caused by: org.apache.ojb.broker.PersistenceBrokerException: Used ConnectionManager instance could not obtain a connection

•INFO   | jvm 1    | 2017/01/05 07:09:27 | Caused by: org.apache.ojb.broker.accesslayer.LookupException: Could not get connection from DBCP DataSource

•INFO   | jvm 1    | 2017/02/04 22:40:29 | [org.apache.ojb.broker.accesslayer.ConnectionFactoryDBCPImpl] WARN: Connection close failed

•INFO   | jvm 1    | 2017/02/04 22:40:29 | Already closed.

•INFO   | jvm 1    | 2017/02/04 22:40:29 | java.sql.SQLException: Already closed.

Now let's look at the solution: allowing more memory (heap size) for the JVM in order to resolve all the above complaints/symptoms.

Under Vontu\Protect\Config there is a configuration file for each service that sets the amount of RAM the heap may use. Extend the values below to leverage some of the existing/unused hardware on the server and improve/tune performance for Enforce as a whole:

# Initial Java Heap Size (in MB)
wrapper.java.initmemory = 4096

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory = 8192

Preventing PowerShell from running via Office


Microsoft’s PowerShell has lately been a tool of choice for malware distributors; the trend has only increased since December 2016’s white paper PowerShell threats surge: 95.4 percent of analyzed scripts were malicious. Too often, end users tricked into opening a malicious attachment will find this powerful tool turned against them. The ultimate payload downloaded by PowerShell is usually ransomware. Once downloaded and run:

***** YOUR FILES HAVE BEING ENCRYPTED *****

Now your organization’s data is lost, unless you have a healthy backup.

 

Application And Device Control: An Excellent Extra Line of Defense

Using Symantec Endpoint Protection’s optional Application And Device Control component, it is possible to prevent malicious Word, Excel or other Office document attachments from accessing PowerShell or cmd.  Here’s a guide illustrating how to craft such a policy yourself….

2017-02-17 10_27_58-10.148.196.246 - Remote Desktop Connection.png

2017-02-17 15_25_33-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_25_50-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_26_37-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_28_30-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_28_53-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

Or, find the attached policy that can be implemented and tested in your environment. Please note that this “Blocking PowerShell.dat” file is provided “as is.” We strongly recommend that it be trialed first in a controlled test environment before applying the policy throughout the organization! Also note that this is one extra layer of defense: it further reduces the risk of a malware infection, but cannot guarantee eliminating all possibility of damage.

 

More MUST READ Articles and Documents

Hardening Your Environment Against Ransomware

https://www.symantec.com/connect/articles/hardening-your-environment-against-ransomware

Support Perspective: W97M.Downloader Battle Plan

https://www.symantec.com/connect/articles/support-perspective-w97mdownloader-battle-plan

 

REPORT: Organizations must respond to increasing threat of ransomware

https://www.symantec.com/connect/blogs/report-organizations-must-respond-increasing-threat-ransomware

 

Ransomware removal and protection with Symantec Endpoint Protection

https://support.symantec.com/en_US/article.HOWTO124710.html

Best Practices for Deploying Symantec Endpoint Protection's Application and Device Control Policies
http://www.symantec.com/docs/TECH145973

Many thanks to mick2009 for reviewing this article!

Preventing PowerShell from running via Office


PowerShell has lately been one of the tools of choice for malware distribution; the trend has grown, according to the December 2016 white paper PowerShell threats surge: 95.4 percent of analyzed scripts were malicious. Frequently, end users are tricked into opening a malicious attachment and find this powerful tool turned against them. The payloads downloaded by PowerShell are mostly ransomware. Once downloaded and run:

***** YOUR FILES HAVE BEING ENCRYPTED *****

Now your organization's data is lost, unless you have a good backup.

 

Application And Device Control: An Excellent Additional Layer of Defense

Using Symantec Endpoint Protection's Application and Device Control feature, it is possible to prevent malicious Word, Excel or other Office document attachments from accessing PowerShell or cmd. Here is an illustrated guide on how to build the policy:

2017-02-17 10_27_58-10.148.196.246 - Remote Desktop Connection.png

2017-02-17 15_25_33-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_25_50-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_26_37-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_28_30-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

2017-02-17 15_28_53-10_148_197_25 - 10.148.197.25 - Remote Desktop Connection.png

Or, check the attached policy, which can be tested and implemented in your environment. We strongly recommend that it be tested in a controlled environment before being applied in production. Note that this is an extra layer of protection: it will reduce the risk of malware infection, but cannot guarantee the elimination of all danger.

 

More MUST READ Articles and Documents

Hardening Your Environment Against Ransomware

https://www.symantec.com/connect/articles/hardening-your-environment-against-ransomware

Support Perspective: W97M.Downloader Battle Plan

https://www.symantec.com/connect/articles/support-perspective-w97mdownloader-battle-plan

 

REPORT: Organizations must respond to increasing threat of ransomware

https://www.symantec.com/connect/blogs/report-organizations-must-respond-increasing-threat-ransomware

 

Ransomware removal and protection with Symantec Endpoint Protection

https://support.symantec.com/en_US/article.HOWTO124710.html

Best Practices for Deploying Symantec Endpoint Protection's Application and Device Control Policies
http://www.symantec.com/docs/TECH145973

Many thanks to mick2009 for reviewing this article!


Security Advisories on SEP 12.1 RU6 MP6 and also SEP v14.0 (6th March 2017)


Just received an alert on Security Advisories for the following products:

* SEP v12.1 RU6 MP6 and earlier
* SEP v14.0

The security advisories are:

CVE-2016-9093 - Local Privilege Escalation Vulnerability
http://www.securityfocus.com/bid/96294

CVE-2016-9094 - Local Command Injection Vulnerability
http://www.securityfocus.com/bid/96298

If you're on SEP v12, you're strongly recommended to upgrade to SEP v12.1 RU6 MP7. And if you're on v14, you're strongly recommended to upgrade to SEP v14.0 MP1.

Symantec has done a write-up on this, which you can find at https://www.symantec.com/security_response/securit...

Get patching, guys/gals! :)

SNAC LAN Enforcement: Switch performance/throughput dropped after enabling 802.1x


During most of our SNAC/NAC 802.1x implementations, we used to sign off the deployment and leave the city the same day. The next day (and this is almost becoming a trend) we would get calls/complaints about switch performance/throughput having dropped considerably after the SNAC/NAC deployment. The gut feeling was always to contact Cisco for a hardware upgrade and to have Symantec provide input for sizing/hardware enhancement.

Hence this article, to highlight the fact that this issue has mostly turned out to be an STP (802.1D) configuration problem rather than a sizing gap.

Please read on for further details if you are, or have been, sailing in the same boat:

Problem Statement:

The IEEE 802.1D Spanning Tree Protocol (STP) has been part of the industry since 1985. STP, as we know, is a Layer 2 protocol that runs between bridges to help create a loop-free network topology. Bridge Protocol Data Units (BPDUs) are packets sent between Ethernet switches (essentially multi-port bridges) to elect a root bridge, calculate the best path to the root and block any ports that create loops. The resulting tree, with the root at the top, spans all bridges in the LAN, hence the name: spanning tree.

STP is an efficient means of preventing loops, at least with the default, simplest configuration settings. It is therefore easy to accept the defaults and never tune the parameters. This leads to STP networks without a proper design, and when SNAC is implemented and 802.1x is enabled, we are all surprised to discover network issues related to spanning tree.

Several aspects of STP can go wrong; however, I would like to focus on the most common one (the default configuration on Cisco switches): no manual root bridge configured.

The absence of a manual root configuration itself represents a lack of STP architecture design. It leaves all switches in the environment using the default root bridge priority of 32768, and if all switches have the same root bridge priority, the switch with the lowest MAC address is elected as the root bridge.

Many networks have not been configured with a single switch to have a lower root bridge priority which would force that core switch to be elected as the STP root for any or all VLANs.

Point to ponder: isn't the lowest MAC address generally found on older/low-end hardware?

In any case, it is possible that a small access-layer switch with a low MAC address could be the STP root. This situation would add some performance overhead and make for longer convergence times because of the root bridge re-election.

Resolution:

When enabling SNAC and 802.1x, configure the core switches with lower STP priorities so that one becomes the root bridge, while any other core bridges have a slightly higher value and take over should the primary core bridge fail. Having "tiered" STP priorities configured on the switches determines which switch should be root bridge in the event of a bridge failure. This makes the STP network behave in a more deterministic manner.

 
On the core Cisco switch you would configure the primary root switch with this command:

Switch1(config)# spanning-tree vlan 1-4094 root primary

On the core Cisco switch you would configure the secondary root switch with this command:

Switch2(config)# spanning-tree vlan 1-4094 root secondary

The net effect of these two commands is to set the primary switch's root bridge priority to 8192 and the secondary switch's to 16384.
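To confirm the election took effect, the root bridge for each VLAN can be checked afterwards (a sketch; command output varies by platform and IOS version, and the VLAN number is a placeholder):

```
Switch1# show spanning-tree root
Switch1# show spanning-tree vlan 10
```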

If you are still facing a congestion issue after a NAC deployment, even after configuring a manual root, feel free to reach out to me and I'll try to partner along in helping find a solution.

SNAC LAN Enforcement: Switch performance/throughput dropped (RSTP not enabled)


It is common for some of the newer features not to be configured on the switch. Use of IEEE 802.1D rather than Rapid STP is one such common example that greatly affects SNAC implementations.

This article covers the second step in identifying/fixing performance/throughput issues on a switch after a SNAC deployment. The first step, of course, is to configure legacy STP properly (which remains the fall-back/legacy support). For more details on STP, refer to the article below:

https://www.symantec.com/connect/articles/snac-lan-enforcement-switch-performancethroughput-dropped-after-enabling-8021x

The classic IEEE 802.1D protocol has the following default timers: 15 seconds for listening, 15 seconds for learning, and a 20-second max-age timeout. All switches in the spanning tree should agree on these timers, and you are discouraged from modifying them. These older timers may have been adequate for networks 10 to 20 years ago, but today this 30 to 50 seconds of convergence time is far too slow, especially for SNAC implementations.

Today, many switches are capable of Rapid Spanning Tree Protocol (IEEE 802.1w), but few network administrators have enabled it. RSTP vastly improves convergence times by using port roles, using a method of sending messages between bridges on designated ports, calculating alternate paths, and using faster timers. Therefore, organizations should use RSTP when they can. If your organization still has switches that cannot run RSTP, don't worry, the RSTP switches will fall back to traditional 802.1D operation for those interfaces that lead to legacy STP switches.

The 802.1D Spanning Tree Protocol (STP) standard was designed at a time when the recovery of connectivity after an outage within a minute or so was considered adequate performance. With the advent of Layer 3 switching in LAN environments, bridging now competes with routed solutions where protocols, such as Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP), are able to provide an alternate path in less time.

Cisco enhanced the original 802.1D specification with features such as Uplink Fast, Backbone Fast, and Port Fast to speed up the convergence time of a bridged network. The drawback is that these mechanisms are proprietary and need additional configuration.

Rapid Spanning Tree Protocol (RSTP; IEEE 802.1w) can be seen as an evolution of the 802.1D standard more than a revolution. The 802.1D terminology remains primarily the same, and most parameters have been left unchanged, so users familiar with 802.1D can rapidly configure the new protocol comfortably. In most cases, RSTP performs better than Cisco's proprietary extensions without any additional configuration. 802.1w can also revert to 802.1D in order to interoperate with legacy bridges on a per-port basis, though doing so forfeits the benefits it introduces on those ports.
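On Cisco Catalyst switches, moving from legacy 802.1D PVST+ to Rapid PVST+ is typically a single global command (a sketch; verify support on your platform and test outside production hours first):

```
Switch(config)# spanning-tree mode rapid-pvst
Switch(config)# end
Switch# show spanning-tree summary
```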

Ref: http://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/24062-146.html

How to use a deployment tool to push packages on a system with System Lockdown enabled?


I will continue from the point where we left off: knowing what a FILE FINGERPRINT in SEP is, how to generate one using checksum.exe, and how to edit, append or merge a FILE FINGERPRINT.

Now let's look at how to configure SYSTEM LOCKDOWN, a protection setting that you can use to control the applications that can run on the client computer.

Previous Articles:

What is "FILE FINGERPRINT LIST" in Symantec Endpoint Protection (SEP)?
https://www-secure.symantec.com/connect/articles/what-file-fingerprint-list-symantec-endpoint-protection-sep

Is it possible to EDIT, APPEND or MERGE a FILE FINGERPRINT in Symantec Endpoint Protection Manager (SEPM)?
https://www-secure.symantec.com/connect/articles/it-possible-edit-append-or-merge-file-fingerprint-symantec-endpoint-protection-manager-sepm

What is SYSTEM LOCKDOWN? In What Stages do I Implement SYSTEM LOCKDOWN in Symantec Endpoint Protection (SEP)?
https://www.symantec.com/connect/articles/what-system-lockdown-what-stages-do-i-implement-system-lockdown-symantec-endpoint-protectio

From here, I am writing this article for a specific use case of System Lockdown. Various challenges, such as patch management, remote deployment and support, arise when supporting a system with full System Lockdown enabled. Hence, I would like to bring up the use of a deployment tool to manage, provision and deploy a system that has System Lockdown enabled.

I propose the following strategy for Windows updates in an environment with System Lockdown implemented, assuming the System Lockdown implementation is already completed and all we need to do is incorporate a deployment tool such as LANDesk, SCCM, Tivoli, etc.

  1. Create a Test Group in SEP Manager (you might want to call it "Deployment Target Testing" or similar)
  2. Stop Policy Inheritance for the group
  3. Change the System Lockdown mode to LOG ONLY
  4. Add the test/pilot machine(s) to the group
  5. If your deployment tool requires an agent, push the agent and reboot if necessary
  6. Push all the approved software packages to this system (which might require multiple reboots)
  7. Monitor the Control log
  8. Gather checksums for the identified UNAPPROVED applications in the Control log
  9. Merge/append them into the SEP Manager MASTER FILE FINGERPRINT policy
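As an aside, the MD5 hashes that checksum.exe gathers can be reproduced for spot checks with a short script. This is a hedged sketch, not Symantec tooling; the helper name is mine:

```python
import hashlib

def file_md5(path, chunk_size=65536):
    """Compute a file's MD5 hex digest, the hash format SEP file
    fingerprint lists are built from (read in chunks for large files)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```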

DLP mail prevent performance


Usually the first question from DLP customers who want to use DLP Mail Prevent is: “What will be the impact on mail delivery?” Yes, DLP will introduce latency in mail delivery, but I will try to show you with the tests below that it will not be noticeable to most end users.

Testing system

All tests were performed on a Windows 2012 multi-tier environment using the Symantec DLP v14.5 MP1 solution. Mail was generated with a homemade system consisting of a multithreaded mail generator, to simulate a set of mail servers, and a “latency-meter”, which receives email after DLP analysis and computes the overall latency introduced by DLP.

The system was configured to produce a smooth email traffic of 10,000 emails generated over 40 minutes. Different policies were active on Mail Prevent, using most DLP detection techniques (DCM, IDM, EDM).
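To give a flavor of what the “latency-meter” computes, here is a small sketch (the function and field names are my own, not the actual test harness) that derives per-message latency and summary statistics from paired send/receive timestamps:

```python
import statistics

def latency_stats(sent_times, received_times):
    """Per-message latency in seconds plus summary statistics,
    as a latency-meter might report them."""
    latencies = [r - s for s, r in zip(sent_times, received_times)]
    return {
        "mean": statistics.mean(latencies),
        "median": statistics.median(latencies),
        "max": max(latencies),
    }
```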

Results

The graph below shows that traffic was quite flat, at around 4 emails/s. A single DLP Mail Prevent server is able to process this traffic without any specific issue.

trafic.png

The graph below shows the latency measured for all messages generated by the system. For most emails, latency is lower than 1 s.

latency.png

Higher latency is observed for emails with attachments of a few MB. But even there, most of them are processed in less than 5 seconds.

size-latency.png

Apart from latency measurements, we also checked server resource usage as traffic increased. We observed that CPU is the most impacted resource when traffic reaches a high level (during our test we reached over 40 emails per second on our single Mail Prevent server). Of course, the network could also become a bottleneck if traffic exceeded the available bandwidth. We did not reach the memory usage limit on our system, but as with all software, if you do, the server may start to use virtual memory on disk and performance will decrease.

These tests may not exactly match your environment's configuration, but they show that a Mail Prevent server will not introduce latency into your messaging system that would be noticeable to end users.
