October 14, 2016

In the past, I found myself needing to audit some network shares for any files that might contain passwords. Instead of searching these shares manually, I decided it would be more beneficial to create a PowerShell script that would provide a more automated approach. What I needed was a script that could be easily run against various network share locations, searching for a particular string (e.g. "password"). Any file found to have this string in either its filename or its contents would then have its full path added to the chosen output text file. For my audit, I was not interested in the passwords themselves, but in the existence of such password files.

For this particular script, we will configure it to accept input directly from the user at the command line. By defining the parameters found in the following code snippet, you will be able to pass the directory you want to recursively search, the text you wish to search for, and the output file you wish to save your results to directly into the PowerShell script. This is extremely useful for scripts that you may run often, and with different parameters, as you will not have to modify the source code each time you wish to run them.

[CmdletBinding()]
Param (
    [Parameter(Mandatory=$True,Position=0)]
    [string]$Directory,

    [Parameter(Mandatory=$True,Position=1)]
    [string]$Text,

    [Parameter(Position=2)]
    [string]$Output
)

The next part of this script is to define, in an Array, the file extensions we wish to perform our text search against. Feel free to add any additional file extensions that you might want to search against, but know that not all extensions work well with this method of searching.

$Extensions=@("TXT","XLS","CSV","DOC","RTF")

Now that we have the extensions that we wish to search for declared in an Array, we will need to loop through each of these extensions, prepending "*." so that we can search for any possible file with that particular file extension. The beginning of this loop will look like so.

ForEach ($Extension in $Extensions) {
    $File = "*." + $Extension

The next part of this loop will be to recursively get a list of all of the files located within the directory given as part of this script's parameters. The results of this code snippet will need to be piped over to the next step.
Get-ChildItem $Directory -Filter $File -Recurse -Force |

The previously obtained list of files will be piped into another loop. For this, we will utilize the ForEach-Object cmdlet, looping through each file. For ease of use, the beginning of this loop will create a progress bar that will show what file is currently being scanned. This will be most useful whenever you are running this script against a directory with numerous files and sub-directories.

Write-Progress -Activity "Scanning Files..." -Status $_.FullName

The first search we will perform against the current file (via the loop) is to determine if the specified search string is contained within the contents of said file. If so, then the current file's path and filename will be added to the output file (specified in the script's parameters).

If (Get-Content $_.FullName -ErrorAction SilentlyContinue | Select-String -Pattern $Text -Quiet)
{
    Add-Content -Path $Output -Value $_.FullName
}

Next, we will scan the current file's filename and determine if the specified search string resides within it. For example, if you are running this script to search for the string "password," it will not only provide you with files that contain the string "password" within their contents, but also any files with "password" in their filename (e.g. MySecretPasswords.txt).

$WildCardText = "*" + $Text + "*"
If ($_.Name -like $WildCardText)
{
    Add-Content -Path $Output -Value $_.FullName
}

When this script is put all together, it looks like the following.

#SearchForString $Directory $Text $Output
#Example:
#SearchForString "\\FileServer\Accounting" "password" "C:\PasswordFiles.txt"


#Define Parameters
[CmdletBinding()]
Param (
    [Parameter(Mandatory=$True,Position=0)]
    [string]$Directory,

    [Parameter(Mandatory=$True,Position=1)]
    [string]$Text,

    [Parameter(Position=2)]
    [string]$Output
)


#Define File Extensions to Audit
$Extensions=@("TXT","XLS","CSV","DOC","RTF")


#Loop through each Extension in our $Extensions Array
ForEach ($Extension in $Extensions) {
    $File = "*." + $Extension

    #Get all files in $Directory
    Get-ChildItem $Directory -Filter $File -Recurse -Force |
            
            #Loop through all files
            ForEach-Object {
                #Write our scan's progress to the progress bar (what file we're currently on)
                Write-Progress -Activity "Scanning Files..." -Status $_.FullName

                #If content of file contains $Text, write to $Output file
                If (Get-Content $_.FullName -ErrorAction SilentlyContinue | Select-String -Pattern $Text -Quiet)
                {
                    Add-Content -Path $Output -Value $_.FullName
                }

                #If filename contains $Text, write to $Output file as well
                $WildCardText = "*" + $Text + "*"
                If ($_.Name -like $WildCardText)
                {
                    Add-Content -Path $Output -Value $_.FullName
                }
            }
}

Running this script is very simple, as we have specified parameters which we can utilize in order to avoid having to modify the original PowerShell script each time we wish to run it. The following screenshot shows an example of what the command for running this script might look like.

Example of script execution.
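Since the screenshot cannot be reproduced here, a typical invocation in text form might look like the following, assuming you saved the script as SearchForString.ps1 (the filename is an assumption; use whatever name you chose):

```powershell
# Hypothetical invocation from the directory containing the script;
# adjust the share path, search string, and output file for your audit.
.\SearchForString.ps1 -Directory "\\FileServer\Accounting" -Text "password" -Output "C:\PasswordFiles.txt"
```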

As the script is being executed, you should see a progress bar that will display what file is currently being scanned.

The script's progress bar in action.

Once the script has finished, you should have all of your results stored within the specified output file. The contents of which might look like the following.

The results of your scan.

September 30, 2016

In the previous post, IT Admin Tips: Create Personal Folders for All Active Directory Users With PowerShell, we went through the steps to create a PowerShell script that would go through your list of Domain Accounts and create a Personal Folder for each active account. This is very useful when you are first implementing Personal Folders, but what about creating Personal Folders for newly created Domain Accounts? Wouldn't it be helpful to have something in-place that automatically creates a properly configured Personal Folder for newly created Domain Accounts within minutes of the account being created? In this post, that is precisely what we will go through creating.

Just like in the previous post, we will begin by importing the ActiveDirectory module for use and setting a variable to be the UNC path for the shared folder.

import-module ActiveDirectory

$Directory = "\\UNCPATHTOSHAREDFOLDER"

Next, we will be doing something similar to what we did within the post IT Admin Tips: Creating AD User Account Alerts, with regards to the "Account Created Alert" that was created. Just like in that post, we will begin by storing the contents of the most recent Event ID 4720, which is generated whenever a new Domain Account is created, into a variable for use.

$Event = Get-EventLog -LogName Security -InstanceId 4720 -Newest 1

Next, we will need to parse out the Domain Account's username from the Event ID. This can be accomplished via the following code.

[String]$String = $Event.ReplacementStrings
$UserName = ($String).split()[0]
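To see why this works: ReplacementStrings is an array, and for Event ID 4720 the new account's name happens to be its first element. Casting the array to [String] joins the elements with spaces, and split() then recovers that first token. A small sketch with made-up replacement data (the values below are stand-ins, not real event output):

```powershell
# Stand-in for $Event.ReplacementStrings from a 4720 event; in the
# real event, the first element is the new account's username.
$Replacements = @("jsmith", "CONTOSO", "S-1-5-21-1111-2222-3333-1001")

[String]$String = $Replacements   # joins the array elements with spaces
$UserName = ($String).split()[0]  # first whitespace-delimited token
$UserName                         # -> jsmith
```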

Now that we have the username for the newly created user, we can obtain information on this account from Active Directory, which we will use when creating the Personal Folder.

$ADUserData = Get-ADUser $UserName
$Name = $ADUserData.Name
$UserID = $ADUserData.SamAccountName

Just like with the previous post, we will now create a Personal Folder for this newly created Domain Account, along with assigning the Full Control permission to this folder for the account.

New-Item -type directory -Path "$Directory\$Name"

$ACL = Get-Acl "$Directory\$Name"
$AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($UserID,"FullControl","ContainerInherit,ObjectInherit","None","Allow")
$ACL.SetAccessRule($AccessRule)
Set-Acl "$Directory\$Name" $ACL

When this code is put all together, we end up with something like the following.

#Import the ActiveDirectory module in order to access AD via PowerShell
import-module ActiveDirectory

#The UNC path for the shared folder
$Directory = "\\UNCPATHTOSHAREDFOLDER"

#Get the contents of the most recent Event ID 4720 (generated when an account is created)
$Event = Get-EventLog -LogName Security -InstanceId 4720 -Newest 1

#Parse out the UserName
[String]$String = $Event.ReplacementStrings
$UserName = ($String).split()[0]

#Get User's Active Directory Object Data
$ADUserData = Get-ADUser $UserName
$Name = $ADUserData.Name
$UserID = $ADUserData.SamAccountName

#Create the new user's Personal Folder
New-Item -type directory -Path "$Directory\$Name"

#Give the user Full Control permissions to their Personal Folder
$ACL = Get-Acl "$Directory\$Name"
$AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($UserID,"FullControl","ContainerInherit,ObjectInherit","None","Allow")
$ACL.SetAccessRule($AccessRule)
Set-Acl "$Directory\$Name" $ACL

Once this script has been created, you can schedule it within Task Scheduler on your Active Directory server using a Domain Account with the appropriate access. Ideally, this account would be some sort of "service" account and not associated with a particular IT Admin. In order for the new Domain Account's Personal Folder to be created immediately, you will need to configure the task to be triggered whenever Security Event ID 4720 occurs.
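That event-triggered task can also be registered from an elevated command prompt. As a sketch (the task name, script path, and service account below are all assumptions for illustration), schtasks supports an ONEVENT schedule with an XPath filter against the Security log:

```bat
schtasks /Create /TN "Create Personal Folder" ^
 /TR "powershell.exe -NoProfile -File C:\Scripts\New-PersonalFolder.ps1" ^
 /SC ONEVENT /EC Security /MO "*[System[(EventID=4720)]]" ^
 /RU CONTOSO\svc-folders /RP
```

You will be prompted for the service account's password, and the task will then fire within moments of each account-creation event.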

September 15, 2016

Recently, I created a PowerShell script to help out with automatically creating a large number of folders, which will be used as our employee "Personal" folders. If you're not sure what a Personal folder is, it is a shared directory used to store any documents or files that are relevant to the work that an employee is doing. These Personal folders are meant to be used alongside "Departmental" shared folders, which are where the documents and files that everyone within a department/area will need access to are stored. In the case of Personal folders, they are meant to hold just the documents and files that the one employee may be working with.

Without using Personal folders, most employees will store the majority of their files on their Desktop or Documents folders on their workstation, only saving a handful of documents to shared departmental folders and the like. This can be a problem in situations where the employee's workstation has a failing hard drive, there's a malware infection, etc. as there are likely no backups of their documents. Personal folders, on the other hand, are typically shared off of a corporate file server that has some sort of backup solution in-place to protect corporate data.

For my particular situation, which spawned the creation of this PowerShell script, I needed to create a Personal folder for every active user account set up within Active Directory. Once created, the correct permissions needed to be applied to the folder so that only the corresponding employee would have access to it. This is a pretty simple task, and is a perfect candidate to make a script for.

Let's get started.

The first thing we will need to do is to import the Active Directory module for use. This module is essential in order for this script to obtain a list of all active accounts within Active Directory.

import-module ActiveDirectory

Next, let's just go ahead and store the UNC path to the shared folder where these Personal folders will be created into a variable for later use.

$Directory = "\\UNCPATHTOSHAREDFOLDER"

Now we can use the Get-ADUser cmdlet, provided to us via the ActiveDirectory module we imported, to get a list of all active accounts within Active Directory. The LDAP filter used below excludes disabled accounts by checking that the "account disabled" bit (bit 2) of the userAccountControl attribute is not set. This snippet of code will look like the following.

Get-ADUser -LDAPFilter "(&(objectCategory=person)(objectClass=user)(!userAccountControl:1.2.840.113556.1.4.803:=2))" -Properties Name,SamAccountName

Since we don't just want the list of accounts, but also want to create a Personal folder for each account, we will need to create a loop. This can easily be done by piping the previous snippet of code into a ForEach-Object loop. Inside of this loop, we will need to first create the Personal folder for the user account, which will look something like the following. Keep in mind that, with how this code is written, the folder's name will be something like "John Smith," even if their account name is "John.Smith."

$Name = $_.Name
New-Item -type directory -Path "$Directory\$Name"

Lastly, we will need to assign the Full Control permission to this folder for the account itself. The following code will accomplish this.

$ACL = Get-Acl "$Directory\$Name"
$User = $_.SamAccountName
$AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($User,"FullControl","ContainerInherit,ObjectInherit","None","Allow")
$ACL.SetAccessRule($AccessRule)
Set-Acl "$Directory\$Name" $ACL

Combining all of this code together, with some minor tweaks and code comments, should give us the following script.

#Import the ActiveDirectory module in order to Access AD via PowerShell
import-module ActiveDirectory


#The UNC path for the shared folder
$Directory = "\\UNCPATHTOSHAREDFOLDER"


#Get all active AD Users and loop through the usernames
Get-ADUser -LDAPFilter "(&(objectCategory=person)(objectClass=user)(!userAccountControl:1.2.840.113556.1.4.803:=2))" -Properties Name,SamAccountName | ForEach-Object {


    $Name = $_.Name


    #Create Directory for each user
    New-Item -type directory -Path "$Directory\$Name"

    #Begin creating folder permissions
    $ACL = Get-Acl "$Directory\$Name"
    $User = $_.SamAccountName
    $AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($User,"FullControl","ContainerInherit,ObjectInherit","None","Allow")
    $ACL.SetAccessRule($AccessRule)
    Set-Acl "$Directory\$Name" $ACL
}

Now you can run your code, which should result in a Personal folder being created for each active Active Directory account. Each of these should have the appropriate permissions applied, which is to give only the corresponding account Full Control over the folder.

March 3, 2016

If you find yourself with the responsibility of managing an Exchange Server 2010 environment, then you will be all too familiar with using the Exchange Management Console. This utility is installed as a part of your Exchange server setup, and provides a graphical interface for managing your entire Exchange environment. By default, this management utility is only accessible from within your Exchange server itself. Fortunately, it is fairly simple to install both the Exchange Management Console and Exchange Management Shell (for PowerShell) on your own workstation. Prior to this, however, there are a few prerequisites that will need to be taken care of.

Microsoft's Exchange Management Console is used to manage your entire Exchange environment.

The first requirement is to download and install the Remote Server Administration Tools, provided by Microsoft. This should be a straightforward installation, but may require a reboot. This particular toolkit is very useful for any Windows server administrator, and will be discussed further in a future post. For now, it is necessary for the task of installing the Exchange management tools.

Once Microsoft's Remote Server Administration Tools has been installed, and your workstation rebooted, you will need to install some IIS components. In order to install these components, you will need to launch Control Panel and select Programs. From here, select Turn Windows features on or off, located under the Programs and Features heading. Within this screen, select IIS 6 Management Console, which should automatically select IIS Metabase and IIS 6 configuration compatibility as well. The following screenshot shows these two components that are required to be installed.


Now you will need to download Microsoft Exchange Server 2010 Service Pack 3. Since this is a large installation file, it will likely take some time to download. Once it is completely downloaded, however, you can move forward with running the setup file. This should begin the extraction process of the installation files for Exchange Server 2010 SP3, and will prompt you for a location to extract the files to. These installation files can be extracted to any location of your choosing.


After the file extraction is complete, you can navigate to the specified directory and run the Exchange Setup file located there. On the initial splash screen for the Exchange installation, you will see prerequisites listed under the Install heading. If you are running a newer Operating System like Windows 10, then you should have both .NET Framework 3.5 SP1 and PowerShell v2 already installed on your workstation. If not, then you will need to do so now.

To begin the installation, click on Step 4: Install Microsoft Exchange.


On the Introduction screen, you can click Next to move on to the License Agreement. From here, select that you accept the agreement and click Next to continue. You will now have the option to opt-in to Exchange Error Reporting, which will automatically send error reports to Microsoft. This is entirely up to you whether you would like to participate or not, but once you have made your decision click Next.

You should now be on the installation screen which allows you to specify whether you will be doing a Typical install or Custom. This is the most important step when installing only the Exchange Server 2010 Management Console on your workstation. In order to do so, select Custom Exchange Server Installation and click Next. I would also recommend that you do not select to automatically install Windows Server roles and features.


You will now be able to select which Exchange server roles that you wish to install. In our case, you will only want to select Management Tools. Click Next once you have made your selection.


Setup should now go through its Readiness Checks. In the event that you have a missing requirement, you will be notified within this screen. If there are any issues, you will need to correct them before retrying. If everything goes well, however, then you should see the following.


You can now click Install to begin the actual installation of the Exchange Management Tools. This might take a few minutes to complete, but you will be given a summary of the progress of the installation. Once everything is completely installed, you can click Finish to launch the Exchange Management Console.


If you have installed this tool on a domain computer, it should automatically connect to your Exchange server. You will be able to verify this by expanding out the menu tree items on the left. In the event that the version of your Exchange Management Console does not match that of your Exchange Server, you will need to download and install the necessary Service Packs on your workstation. At this point, you should have both the Exchange Management Console and Exchange Management Shell installed on your workstation. You will no longer need to remotely access your Exchange server in order to manage your Exchange environment.

February 18, 2016

Windows Management Instrumentation (WMI) Filters give you the ability to create Group Policy Objects (GPOs) that have a dynamically determined scope based upon the target system's attributes. This can be extremely useful whenever you want to apply a policy to specific systems that share a common attribute, such as Operating System or Model type. Without using WMI Filters, you would likely need to add these systems to a Domain Group which the GPO is applied to, or manually add them into the GPO itself. This can save you precious time, and headaches, once you fully grasp its potential.

For this particular example, we will design a WMI Filter that will deploy a GPO to all Domain Computers, except for those that are Windows 8 / Server 2012 or newer. Since we will be using WMI queries, we will need to reference the Operating System Versions that Microsoft uses in order to ensure that the correct ones are included within our query. For this, I referenced the Operating System Version MSDN page. For easier reference, I have copied the OS Version table below.

Operating System                          Version Number
Windows 10                                10.0*
Windows Server 2016 Technical Preview     10.0*
Windows 8.1                               6.3*
Windows Server 2012 R2                    6.3*
Windows 8                                 6.2
Windows Server 2012                       6.2
Windows 7                                 6.1
Windows Server 2008 R2                    6.1
Windows Server 2008                       6.0
Windows Vista                             6.0
Windows Server 2003 R2                    5.2
Windows Server 2003                       5.2
Windows XP 64-Bit Edition                 5.2
Windows XP                                5.1
Windows 2000                              5.0

Based off of this information, we can see that we will need to query for Operating Systems that are lower than, but not including, Version 6.2. Unfortunately, WMI does not treat these Version numbers as numerical values, but instead handles them as strings. String comparison proceeds character by character, so "10.0*" sorts below "6.2" simply because its first character, "1," is less than "6." That being the case, we cannot just do something along the lines of "Version < 6.2," as Version "10.0*" (Windows 10 / Server 2016) would also satisfy that condition. In order to build a WMI query that meets our requirements while working within WMI's string-comparison behavior, we can do something that would logically look like the following.

"Version < 6.2 & Version != 10*"
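PowerShell uses the same lexical ordering when comparing strings, which makes the problem easy to demonstrate:

```powershell
# String comparison is character by character: "1" sorts before "6",
# so the numerically larger version string compares as the smaller value.
"10.0" -lt "6.2"                    # -> True (lexical string comparison)

# Casting to [Version] compares the numbers themselves.
[Version]"10.0" -lt [Version]"6.2"  # -> False
```

WQL offers no equivalent cast, which is why the filter has to exclude the 10* versions explicitly.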

Now that we have a general idea of how the query used for the WMI Filter will be setup, we can move forward with actually creating it. In order to get started with creating this, you will need to launch the Group Policy Management MMC on your Active Directory server. From here, right-click on WMI Filters (near the bottom) and select "New" in order to start creating your filter.


Once you have a Name and Description for the WMI Filter, you can click "Add" in order to designate the WMI query that will be used for it. Using the logical query we came up with previously, we can convert it over to the syntax used by WMI. If you are familiar with using the WMIC command within Windows, then this might look a bit familiar. What we end up with is the following:

SELECT Version FROM Win32_OperatingSystem WHERE Version < "6.2" AND NOT Version LIKE "10.%"

What this query does is check the Version information located under the Win32_OperatingSystem object, selecting anything with a Version less than 6.2 that does not begin with 10. Within the query, the % symbol acts as a wildcard for the remainder of the Version string. Once this has been entered within the WMI Filter, you should end up with something that looks like the following.


At this point, you can go ahead and save the WMI Filter that you have created. It should now show up under the WMI Filters item within the Group Policy Management utility.


With the WMI Filter having been created, you can now assign it to the corresponding Group Policy Object. In order to do this, you will need to select the GPO that you want to apply this WMI Filter to, and from there reference the Scope tab. Near the bottom should be a drop-down for WMI Filtering. Click the drop-down and select the filter that you have created.


You should now have your GPO successfully applying to systems based off of the WMI Filter. In this case, your GPO will apply to all Domain Computers, except for those that are Windows 8 / Server 2012 or newer. In order to begin testing this, you will need to run a GPUpdate on your machines. This can easily be done via the following command, which may require the currently logged-in user to log off in order to take effect.

gpupdate /force

In order to test whether the GPO is applying as expected, you should run this command on a system that it should be applied to (e.g. Windows 7) and on one that it should not be applied to (e.g. Windows 10). Once you have run the GPUpdate command on these systems, you can verify the GPO assignments by using the command GPResult. Since this command tends to generate quite a large output, it is sometimes best to redirect the output to a text file. This can easily be done via the following command, which will create a text file containing the results on the root of the system's C drive.

gpresult /v > C:\GPResult.txt

If everything is applying correctly, you should see the following on your system that falls within the WMI Filter and therefore is to be excluded from the GPO. On your other system, however, you should see that the GPO is applying successfully.


Having followed these steps, you should now have a basic understanding of setting up WMI Filters with which you can apply Group Policy Objects to systems based off of their attributes. There are many different possibilities available for deploying your GPOs other than just by Operating System Versions. For example, you can deploy GPOs based off of whether the system is a Virtual Machine or not, the time zone that the system is within, the name of the system (e.g. system names beginning with "Finance"), and many more. No matter what type of dynamic filtering you want to take advantage of, they are all setup relatively the same way. In the event that you decide to try to build out your own WMI query, there is a fairly decent tool called the WMI Filter Validation Utility that is provided as freeware by SDM Software. I tested this utility out during the setup of my WMI Filter, and it seems to do a good job of helping you test out how your filter will function.
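As rough sketches of two of those cases (the "%Virtual%" match is an assumption; hypervisors report different Model strings, so verify against your own hardware inventory first), each of the following would be entered as its own WMI Filter, just as above:

```sql
SELECT Model FROM Win32_ComputerSystem WHERE Model LIKE "%Virtual%"

SELECT Name FROM Win32_ComputerSystem WHERE Name LIKE "Finance%"
```

The first would scope a GPO to machines whose hardware Model contains "Virtual," and the second to computer names beginning with "Finance."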

January 29, 2016

At some point during your SAP implementation, you will need to look at creating Security Roles for your end users. For the sake of time, I will go ahead and assume that you are at least familiar with SAP Authorization Concepts. If, however, you are unfamiliar with User Master Records, Authorization Objects, or running Transaction Codes such as PFCG, SU24, etc., then you may want to brush up on that first. SAP actually offers some courses on this material, which I have taken part in and found very informative. I would recommend that you at least take the three-day ADM940 AS ABAP - Authorization Concept course.

Designing, or even re-designing, your SAP Security Roles can be a daunting task. Having gone through this myself, and experiencing a handful of SAP module "Go-Lives," I have come up with what I believe to be a successful plan for tackling this particular project.

I. Determine the Business Functions Currently Performed
For each SAP module that you will be designing Security Roles for, you will need to determine what business functions are currently being performed by your business's employees. Some examples of common business functions are as follows:
     - Performing check runs
     - Creating new vendors
     - Issuing payment to vendors
     - Creating new SAP users

Keep in mind that this task will require more than just yourself to accomplish. You will need support from business personnel within each functional area (Accounts Payable, Human Resources, Materials Management, etc.), along with the help of your business analysts. You should also keep this step very high level, avoiding the pitfall of trying to build anything within SAP just yet. Right now, your focus should be only to map out the functions that are utilized by your business. This step, being one of the most critical in this project, can take the longest.

II. Determine how each Business Function will be Restricted
Having mapped out the business functions that are currently being performed, you should now begin looking at how each function will be restricted. As with the previous step, this one will involve the support of key business personnel and your business analysts. In order to best perform this particular step, you should have an understanding of SAP Organizational Levels, such as Company Codes, Profit Centers, Warehouse Numbers, etc., as this is how you will be restricting access for each business function. For example, it may be a business requirement that employees within Company A are not able to view financial data within Company B, or perhaps warehouse employees at Warehouse X should not be able to modify the inventory of Warehouse Y. These particular restrictions to your business functions are done via Organizational Levels.

III. Determine the Transaction Codes that are Currently Being Used (Optional)
This step is only needed if you are in the process of re-designing your existing SAP Security Roles, and can be skipped if this is a completely new setup. The need for this step, in a re-design project, is due to the complicated nature of SAP. A re-design can be quite strenuous while your SAP systems are live in Production, and you should try to reduce the risk of leaving out critical access that your employees require. What I like to do is to generate a list using either SAP's Workload Monitor (Transaction Code ST03N) or by taking advantage of SAP GRC's Action Usage reports. With this list, I know for certain what Transaction Codes are being run by all employees, and I can best ensure that none are overlooked during the Security Role creation step.

IV. Determine what Transaction Codes Comprise each Business Function
Now you can utilize what you determined in Step 1 by mapping out what Transaction Codes go with what business functions. In order to accomplish this step, you will need to work directly with your business analysts as they should have the SAP knowledge that is needed. If you happened to perform the previously listed step, then you can reference the list of Transaction Codes that you determined are currently being used within your system. Regardless, you will need to take the time to work with your business analysts in order to ensure that every required Transaction Code for each business function is mapped to it.

V. Create Security Roles
Using the information that was determined from the previous step, you should now be able to begin creating your Security Roles. You will first need to create Master Roles for each business function, assigning the proper Transaction Codes to each of them. Once these Master Roles have been created, you can then begin creating Derived Roles from each of them, based off of your findings from Step 2. Keep in mind that there will be no user assignment of the Master Roles, as these will only be used as a "shell" from which you will create the Derived Roles. These Derived Roles will be restricted via Organizational Levels, allowing users to only have access to specific business areas for the corresponding job function. For example, you may create a Master Role for financial reporting, and then create Derived Roles for each of your corporate offices. Assigning the Derived Role for Office A to a user only allows them to see financial data corresponding to Office A, and not for any of the other corporate offices. This allows you to better ensure that employees do not end up with access to data that they, otherwise, would not be privy to.

VI. Create Test UserIDs
With all of your Security Roles created, you can now import them into your testing environment. At this time, you can move forward with creating test userIDs. How you set up your test userIDs may vary, as it is dependent on how much time and resources you can have dedicated to the testing process. For example, you may want to have a test userID for each business function, so that you can ensure that the function works correctly from start to finish (e.g. a "Check Run" test userID). Then again, this may be unfeasible if you have far too many business functions. If that is the case, then you may take the approach of creating test userIDs for key business personnel. If you choose the latter approach, then you will need to work with the business in order to determine what business functions, along with what restrictions, each of these key personnel will need access to.

VII. Test Security Roles
Using the test userIDs that were created in the previous step, you will now be able to work with the personnel who were chosen to perform testing of your Security Roles. This can be a time-consuming process, as it will involve them testing business functions from start to finish, all the while providing you with screenshots of any errors they run into (via Transaction Code SU53). You will then have to review these errors and make the necessary changes, keeping documentation of your changes. Typically, this step takes the longest to perform, but it is critical that it is completed thoroughly.

VIII. Determine Required Business Functions for Common Job Role
With the support of key business personnel, you will now need to look at each common job role within your business (e.g. AP Clerk, Warehouse Manager, Business Analyst, etc.) and determine what business functions (refer to Step 1) each would need access to. For example, maybe all of your AP Clerks need the ability to perform AP Payments, Process Vendor Invoices, etc. Perhaps your business's Warehouse Clerks all need access to perform Goods Movements. The key thing to remember during this step is that you are documenting only the business functions that everyone with that job title requires. If, for example, you have one AP Clerk who requires additional access, you would not want to give that particular access to every AP Clerk.

IX. Create Composite Roles
Utilizing the information obtained in the previous step, you can now move forward with creating Composite Roles within your SAP system. For this step, you will create a unique Composite Role for each common job role (e.g. Office A - AP Clerk, Warehouse B - Warehouse Clerk, and so on), containing the corresponding Security Roles based on the business functions you determined. For example, you may end up creating a Composite Role for AP Clerks located at Office A that contains access to perform the necessary business functions, with that access restricted to Office A's information via the role restrictions you previously determined. By doing this, you can easily ensure that everyone who shares the same job role, and therefore the same job duties, has exactly the same access as one another. This becomes extremely useful whenever you are dealing with new users who need to be set up within your SAP system, as you can quickly assign them the Composite Role that corresponds to their job role. Using Composite Roles also makes it easier to add and remove access if the requirements for a job role change within the business.
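Conceptually, a Composite Role is just a named bundle of Security Roles, and assigning it grants everything in the bundle at once. The following is an illustrative sketch in plain Python (not SAP code), with all role and user names invented:

```python
# A Composite Role maps a common job role to its set of Security Roles.
composite_roles = {
    "Office A - AP Clerk": {
        "Z_AP_PAYMENTS_OFFICE_A",      # AP Payments, restricted to Office A
        "Z_VENDOR_INVOICES_OFFICE_A",  # Process Vendor Invoices, Office A only
    },
}

def assign_composite(user_assignments, user_id, composite_name):
    """Assigning a Composite Role grants every contained Security Role at once."""
    roles = user_assignments.setdefault(user_id, set())
    roles.update(composite_roles[composite_name])
    return user_assignments

# A new AP Clerk at Office A needs only one assignment:
assignments = assign_composite({}, "JDOE", "Office A - AP Clerk")
```

Changing the bundle in one place updates what every holder of that job role receives, which is the maintenance benefit described above.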

X. Perform Risk Analysis on Security Roles
Typically, this step is driven by an industry compliance standard, such as Sarbanes-Oxley, but it should be considered even if your business isn't required to adhere to any regulatory compliance standard. If you have SAP's Governance, Risk, and Compliance (GRC) solution set up within your environment, this step can be easily performed by utilizing the built-in User Level Access Risk Analysis to determine whether any of your Security Roles and/or employees will have Segregation-of-Duties (SoD) violations within their access. If, however, you do not have access to GRC, you will need to spend some time working with your business analysts and key personnel to analyze the access that your SAP users will have, based upon the Security Roles you have created. A key thing to remember during this step is that since you created your Security Roles based on common business functions, you can focus your discussions around the risks associated with those business functions. This prevents discussions from becoming "too technical," yet still allows you, as the SAP Security Administrator, to obtain the information you need in order to make any necessary Security Role changes.
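At its core, a user-level SoD check asks: given the business functions a user's roles grant, do any defined pairs of conflicting functions appear together? Below is a hypothetical sketch of that logic in plain Python, along the lines of what GRC's Access Risk Analysis automates; the rules and the role-to-function mapping are invented for illustration:

```python
# Pairs of business functions that a single person should not hold together.
SOD_RULES = [
    ("Process Vendor Invoices", "AP Payments"),  # invoice entry vs. paying it
    ("Create Vendor", "AP Payments"),            # vendor creation vs. paying it
]

# Which business functions each Security Role grants (hypothetical names).
ROLE_FUNCTIONS = {
    "Z_AP_PAYMENTS_OFFICE_A": {"AP Payments"},
    "Z_VENDOR_INVOICES_OFFICE_A": {"Process Vendor Invoices"},
    "Z_GOODS_MOVEMENTS_WH_B": {"Goods Movements"},
}

def sod_violations(assigned_roles):
    """Return the conflicting business-function pairs a user's roles grant."""
    functions = set()
    for role in assigned_roles:
        functions |= ROLE_FUNCTIONS.get(role, set())
    return [pair for pair in SOD_RULES
            if pair[0] in functions and pair[1] in functions]
```

Because the Security Roles were built around business functions, the rule set stays at the business-function level, which mirrors the non-technical discussions described above.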

XI. Resolve Risks and/or Ensure that Mitigating Controls are in Place
For any SoD risks that were discovered in the previous step, you should work with key personnel within your business, including members of management, to resolve them. Typically, this will involve changing what business functions a particular employee is tasked with handling, so that they no longer have conflicting access. In the event that a risk cannot be resolved this way, possibly due to department size, you will need to ensure that a proper Mitigating Control is in place. Creating the Mitigating Control will heavily involve the assistance of key business personnel, including certain members of management, as it will be their responsibility to maintain adherence to the details of the Control. Keep in mind that a Mitigating Control does not prevent a risk from being carried out; instead, it shows your business whether one has occurred within your system.

XII. Promote Security Roles to Production
At this point, your Security Roles should have been successfully tested within your test environment. Any issues that were run into during testing should have been resolved and re-tested for verification. Composite Roles based on common job roles (e.g. AP Clerk) should exist and have the correct corresponding Security Roles assigned to them. Any potential business risks should have been discussed and resolved, whether by changing the user's access or by implementing a Mitigating Control. If everything looks to be complete, you can now receive business sign-off to promote your Security Roles into your Production system. At this time, you can assign your system users the access that it was determined they would have. If the purpose of this whole project was to re-design existing Security Roles, it might be wise to migrate your system users over to your newly designed Security Roles over a period of time.

While I have had success in following this project plan when designing and re-designing SAP Security Roles, you should still determine what will work best for your business. Like the majority of SAP-related work, this plan should be tailored to fit your business's needs and requirements. As I've stated many times throughout this post, you will need help from many key individuals within your business in order to successfully accomplish this project, and you will need to work with them so that they take ownership of many of the tasks within it. This is not a project that should be taken lightly, and everyone involved should understand that it will take quite some time to fully accomplish, with most of your time spent in the planning stages.

Hopefully these steps will be helpful if you ever find yourself responsible for designing or re-designing SAP Security Roles within your SAP environment. Keep an eye out for future posts where I dive deeper into some of the more technical parts of this project, including the use of a Security Matrix document and many other steps.