TOGAF 9 Certified Exam Breakdown

In the last post I hadn’t found the breakdown of questions for the exam. The Open Group does publish the TOGAF 9 Certified Study Guide (the link goes to The Open Group’s PDF offering at their shop, but you can also find it on Amazon.com in various formats). Inside that source are both the 13 units, or learning objectives, and the question breakdown.

First, the objectives. Someone who has achieved TOGAF 9 Foundation status should understand:

  • the basic concepts of Enterprise Architecture as well as the TOGAF framework
  • the core concepts of the TOGAF standard
  • the vocabulary of the TOGAF standard
  • The Architecture Development Method (ADM) cycle, the purpose of each phase, and how to adapt the ADM for their organization
  • the concept of the Enterprise Continuum, its purpose, and what its individual parts are
  • how each of the ADM phases contributes to the success of Enterprise Architecture
  • the ADM guidelines and techniques
  • how Architecture governance contributes to the Architecture Development Cycle
  • the concepts of architecture views and viewpoints and how they help communicate with stakeholders
  • the concept of building blocks — there are two types, architectural and solution
  • the key deliverables of the ADM cycle
  • the two example TOGAF reference models
  • the TOGAF certification program

The question breakdown, according to the study guide, is this:

Topic                                               # of Questions
Basic Concepts                                      3
Core Concepts                                       3
The ADM                                             3
The Enterprise Continuum                            4
ADM Phases                                          9
ADM Guidelines and Techniques                       6
Architecture Governance                             4
Architecture Views, Viewpoints, and Stakeholders    2
Building Blocks                                     2
ADM Deliverables                                    2
TOGAF Reference Models                              2
Total                                               40

Note that the vocabulary (the study guide uses the phrase “key terminology”) is not separately tested, nor are the details of the TOGAF certification program. However, it is common sense to expect the vocabulary to appear in the questions themselves.
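With the breakdown in hand, one practical use is to score a practice exam per topic and rank your weakest areas. Here’s a minimal Python sketch of that idea; the question counts come from the table above, but the per-topic scores are made-up sample data, not real results:

```python
# Rank TOGAF 9 Foundation topics by practice-exam performance.
# Question counts per topic are from the study guide's breakdown;
# the "correct" values are hypothetical sample scores.

practice_results = {
    "Basic Concepts": (2, 3),                       # (correct, asked)
    "Core Concepts": (3, 3),
    "The ADM": (1, 3),
    "The Enterprise Continuum": (2, 4),
    "ADM Phases": (5, 9),
    "ADM Guidelines and Techniques": (4, 6),
    "Architecture Governance": (3, 4),
    "Views, Viewpoints, and Stakeholders": (1, 2),
    "Building Blocks": (2, 2),
    "ADM Deliverables": (1, 2),
    "TOGAF Reference Models": (2, 2),
}

total_correct = sum(c for c, _ in practice_results.values())
total_asked = sum(a for _, a in practice_results.values())
passing = 0.55 * total_asked  # 55% pass mark -> 22 of 40

print(f"Score: {total_correct}/{total_asked} (need {passing:.0f} to pass)")

# Weakest topics first: lowest percentage correct.
for topic, (correct, asked) in sorted(
    practice_results.items(), key=lambda kv: kv[1][0] / kv[1][1]
):
    print(f"{topic}: {correct}/{asked} ({correct / asked:.0%})")
```

Running this against real practice results would tell you where to spend the remaining study time.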

Understanding the TOGAF 9 Foundation Exam

When I teach the ISACA Certified Information Systems Auditor (CISA) course, one of the things I walk candidates through is how the test is structured and how much each domain is weighted for the overall exam. By understanding the weighting and comparing how you’re doing based on that, you should have a reasonable idea of whether or not you can pass the test. As time gets short, if you focus on the areas that have more weight, you increase the likelihood of passing the test. For the CISA, this type of strategy is absolutely necessary because the CISA is “a mile wide and an inch deep.” Other tests like this include ISC2’s CISSP. The TOGAF 9 standard is quite large and even though the foundation exam narrows the focus, it’s still a test you have to strategize for. So with that said, let’s look at the testing parameters.

The TOGAF 9 Foundation exam is a 40 question, multiple choice exam. It is closed book and one has 60 minutes to complete the exam. A passing score is 55% (or 22 questions answered correctly out of the 40). It’s a bit on the expensive side ($360 in the US at the time of this post) and if you fail you have to wait a month before you can take it again.

There are 13 units that are applicable for the TOGAF 9 Foundation exam. I’ve seen a couple of places that assign weights to the units; however, I’ve not seen any official documentation on how many questions are asked for each unit. As a result, by taking a practice exam I can determine which units I’m weakest at, but there’s no “gaming” the test like one could for the CISA based on the weight of each domain. That’s important to know, especially given the one-month waiting period should I fail the first time.

Speaking of the 13 units, Unit #13 is the TOGAF certification program which has the following learning outcome: “Explain the TOGAF Certification program, and distinguish between the levels for certification.” This post and the previous one cover unit 13.

Preparing for TOGAF Certification

The Open Group Architecture Framework (TOGAF) is one of the most well-known architectural frameworks in our industry. TOGAF is a framework that helps organizations implement enterprise architecture as a discipline, which has a myriad of goals I will go into in a later post. Like a lot of industry frameworks and technologies, there is a certification process to show others that you know and understand TOGAF. With TOGAF, there are two levels of certification:

  1. TOGAF 9 Foundation
  2. TOGAF 9 Certified

There is a set of certifications that came out along with TOGAF 10, but those are more along the lines of learning paths. TOGAF 9 certification is what’s widely recognized, and the knowledge base behind TOGAF 9 is the essence of good enterprise architecture. As a result, I’m looking to go forward and get TOGAF 9 Certified.

It is possible to take a combined exam, and that would save some money and time, but I like how things are broken out for the two levels, so my initial focus is the Foundation exam.

So why tell folks that’s what I’m going for? Well, as I started looking for resources for TOGAF 9 certification, I didn’t find a lot. The Open Group has its study guides, and a lot of courses and training touch on TOGAF, but I just didn’t find many free community resources specifically focused on TOGAF. An easy way to solve that is to blog about my own studies. Not only does that help ME study, but it provides some additional resources for those who may want to take the exams themselves.

Now, this isn’t my brilliant idea. I’m reusing it (a core concept in enterprise architecture and TOGAF, BTW). My building block is Kenneth Fisher’s SQL Studies site. He has done an amazing job over a decade of posts. If you’re not familiar with it or him, please check out his work.

Digital Trust Workshop Materials

Here are the Digital Trust Workshop syllabus and Day 1 slides (PDF).


Two Percent, Five Percent, More?

I was reading Seth Godin today and he makes this claim:

If 2% of a population takes coordinated action, it makes a difference. If 5% do, it can change everything.

https://seths.blog/2022/05/the-ones-who-didnt-help/

The point isn’t the exact figure, 2% or 5%, taken without context. Rather, if we know roughly what percentage is required to make a difference, then increasing that share, even marginally compared to the overall population, leaves us better off.

Applying this to technology, we aren’t going to be able to convince everyone. But if we can get some idea of what percentage it would take to cause a change for the better, we should focus on those most likely to adopt. A lot of changes in technology have started with an initially modest adoption. And while I believe the “shun” statement was tongue-in-cheek, I definitely agree with how Seth concluded his post:

Instead, we have the chance to find and connect and celebrate the people who care enough to make a difference.

Reading more efficiently

In IT, we have to read a lot. For instance, understanding how to set something in Azure or Okta or vSphere may mean we are consuming 5 or more “articles” to get the gist of what we need to do. The better we are at reading and extracting the information pertinent to the task, the faster and more accurately we can accomplish our work.

If you’re still pretty much a word-by-agonizing-word reader (with apologies to Jeff Moden and RBAR), the following may help you read quicker and retain the information better:

How to Read a Book a Week by Peter Bergman (Harvard Business Review)

Architecture – Commonality is an accelerator

In IT, commonality is an accelerator. When I say commonality, I mean commonality of:

  • Components, like libraries
  • Object models
  • Interface definitions
  • Tools

Let me give you an example. Imagine I have 3 different scrum teams, each supporting 3 different but interconnected applications. They each need to develop API calls for their system to talk to the other 2. The typical approach is for developers on one scrum team to learn the APIs of the other 2 systems. However, while I’m learning the APIs of those other 2 systems, I’m not working on my own system’s product backlog. Wouldn’t it be great if there was a standard API which all 3 systems implemented? Standardized methods with standardized parameters would mean each team could simply call to the standard. There wouldn’t be any wasted time learning the internals of a different system. That would definitely accelerate work and deployment of features for each team.

Is there some additional work for each team? Yes. They’ll have to take their APIs and build an interface layer that maps to the standard. However, that does two things:

  • It ensures the folks who know the system the best are the ones building the interface layer, rather than a team supporting a different system trying to figure things out.
  • It also ensures each team is familiar with the standardized model, meaning their ability to implement interfaces with other systems should accelerate.
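As a sketch of what that interface layer looks like in code, here’s a minimal Python example. All the class and method names here are hypothetical illustrations, not from any real system: each team publishes an adapter that maps its internal API onto one shared standard, so callers only ever learn the standard.

```python
# Sketch of a standardized API that several systems implement via adapters.
# All names here are hypothetical illustrations.

from abc import ABC, abstractmethod


class StandardOrderApi(ABC):
    """The one interface every team codes against."""

    @abstractmethod
    def get_order_status(self, order_id: str) -> str: ...


class BillingSystem:
    """One team's internal API, with its own naming and conventions."""

    def fetch_invoice_state(self, invoice_ref: str) -> dict:
        return {"ref": invoice_ref, "state": "PAID"}


class BillingAdapter(StandardOrderApi):
    """Built by the billing team, who know their system best:
    maps the internal API onto the shared standard."""

    def __init__(self, billing: BillingSystem) -> None:
        self._billing = billing

    def get_order_status(self, order_id: str) -> str:
        return self._billing.fetch_invoice_state(order_id)["state"]


# A caller on another scrum team only needs the standard, never the internals.
def report(api: StandardOrderApi, order_id: str) -> str:
    return f"{order_id}: {api.get_order_status(order_id)}"


print(report(BillingAdapter(BillingSystem()), "ORD-42"))  # ORD-42: PAID
```

The calling team writes `report` once against `StandardOrderApi` and it works with any system that ships an adapter, which is the commonality payoff described above.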

Consider what happens if each team had to learn the other teams’ systems. We actually lose more time than just the learning time. We also lose time due to the following:

  • Troubleshooting errors due to not having a full understanding of the other system. This is for both teams: the team learning and the team supporting.
  • The supporting team effectively has to mentor each learning team. This isn’t something that’s easy to put in the product backlog. So it usually shows up as untracked/unplanned work.

Commonality reduces the time spent on those two things. So overall, if I have a standardized API interface, if I have commonality on my systems, I should be able to build faster and ship faster.

Architecture – The Perfect Enemy

When I was younger, I always wanted the perfect solution to an IT problem. I remember getting into argument after argument trying to insist upon the perfect configuration, setup, or whatever it was to meet the need.

The problem with looking for the perfect solution is that a lot of considerations aren’t taken into account. These apply not only to the problem at hand, but more broadly. Let’s look at some of them:

  • The overall cost in money
  • The amount of human effort it will take to get it implemented.
  • The timeline it will take to implement.
  • The complexity it introduces to the environment.

One issue we encounter when insisting on the perfect solution is that we bypass the good-enough solutions. And oftentimes the work and cost difference between good enough and perfect is substantial. By focusing on perfect, we end up committing people who could be used elsewhere for far longer than a good-enough solution would require. We end up spending more money. In short, we reduce what the organization can accomplish overall.

When we focus on perfect vs. good enough, we also potentially violate lean and agile principles. Lean and agile would indicate we get a minimum solution together as quickly as possible for feedback, for testing, for learning information we didn’t know we needed. From that initial offering, we get the knowledge we need to improve. And by iterating quickly, we end up developing the solution the user needs, not the one we had in our heads.

And that points out another issue with perfect solutions: they are perfect based on what we know at that point in time. But we don’t know what we don’t know. What we envision as a perfect solution may be far from the best solution. We just don’t have the information to see that. Therefore, it’s not good to focus on the perfect; it’s only perfect based on incomplete information. And the perfect is the perfect enemy of architecture.

Webinar on Data Security and Compliance

Tomorrow, July 13, 2021, I will be giving a webinar on data security and compliance. Here’s the sign-up link:

SQL Server Data Security and Compliance sign-up (free, but registration required)

Here’s what we’ll be covering:

Every enterprise organization must meet particular data security and compliance requirements such as GDPR, CCPA and HIPAA. With so many Microsoft SQL Server databases in our enterprise, we need an automated way to discover, scan and identify sensitive and personal information so that we know what data needs to be protected. In this webinar we’ll first consider the configuration settings that are tied to data security/compliance. Then we’ll start looking at how SQL Server performs data security, especially authorization and auditing, and what tools are available to us. We’ll also briefly cover data classification, data masking and data encryption, as those are part of most data security/compliance efforts as well.

Geek Sync: Meeting Security Benchmarks and Compliance with Microsoft SQL Server

Tomorrow, April 28, 2021, 12 PM EDT, I will be talking about meeting security benchmarks and compliance requirements with Microsoft SQL Server. Here is the registration link:

Geek Sync | Meeting Security Benchmarks and Compliance with Microsoft SQL Server

Here is what we’ll be talking about:

In today’s IT landscape, we are faced with meeting an ever increasing number of laws, regulations, and industry standards. The good news is that the majority of these different requirements overlap with each other and we can configure our SQL Servers accordingly. In this webinar we’ll take a look at those standard configuration settings you should be setting in your environments as well as what you’ll be auditing for in order to meet all of your compliance criteria. We will start with the recommended security “good” practices for managing identity and permissions. From there we will move on to how to audit SQL Server, both for security changes and actual activity. There may be specific items you need to account for, and we’ll walk through how to set those up within SQL Server. Finally, we’ll briefly discuss what it would take to deploy this across your entire environment, especially using the functionality provided out of the box by Microsoft and with tools like PowerShell.
