MuleSoft-Platform-Architect-I Exam Questions

Total 97 Questions


Last Updated On : 16-Jan-2025

True or False: We should always make sure that the APIs being designed and developed are self-servable, even if that requires more man-days of effort and resources.


A. FALSE


B. TRUE





B.
  TRUE

Explanation

Correct Answer: TRUE

*****************************************

As per MuleSoft's proposed IT Operating Model, designing APIs and making sure that they are discoverable and self-servable is VERY IMPORTANT and decides the success of an API and its application network.

What API policy would be LEAST LIKELY used when designing an Experience API that is intended to work with a consumer mobile phone or tablet application?


A. OAuth 2.0 access token enforcement


B. Client ID enforcement


C. JSON threat protection


D. IP whitelist





D.
  IP whitelist

Explanation

Correct Answer: IP whitelist

*****************************************

OAuth 2.0 access token enforcement and Client ID enforcement policies are VERY common on Experience APIs, as API consumers need to register and access the APIs using one of these mechanisms.

>> JSON threat protection is also VERY common policy to apply on Experience APIs to prevent bad or suspicious payloads hitting the API implementations.
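To illustrate what a JSON threat protection policy guards against, here is a minimal Python sketch that rejects malformed or excessively nested payloads before they reach the implementation. The depth limit and function names are hypothetical; the actual policy exposes configurable limits for depth, key length, and more.

```python
import json

MAX_DEPTH = 5  # hypothetical limit; the real policy has configurable thresholds

def json_depth(node, depth=1):
    """Return the maximum nesting depth of a parsed JSON structure."""
    if isinstance(node, dict):
        return max((json_depth(v, depth + 1) for v in node.values()), default=depth)
    if isinstance(node, list):
        return max((json_depth(v, depth + 1) for v in node), default=depth)
    return depth

def accept_payload(raw_body: str) -> bool:
    """Reject malformed or suspiciously deep JSON before it reaches the API."""
    try:
        parsed = json.loads(raw_body)
    except ValueError:
        return False  # malformed JSON is rejected outright
    return json_depth(parsed) <= MAX_DEPTH

print(accept_payload('{"order": {"items": [1, 2, 3]}}'))  # True
print(accept_payload('[' * 50 + ']' * 50))                # False: 50 levels deep
```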

IP whitelisting is most commonly applied on Process and System APIs, to allow only the IP range inside the local VPC. It is occasionally applied to Experience APIs where the end users/API consumers are FIXED.

When we know upfront which consumers will access certain Experience APIs, we can request static IPs from those consumers and whitelist them to prevent anyone else from hitting the API.

However, the Experience API in this question/scenario is intended to work with a consumer mobile phone or tablet application. That means there is no way to know all the possible IPs to whitelist, as mobile phones and tablets are vast in number and can be any device in the city/state/country/globe.

So, IP whitelisting is the LEAST LIKELY policy to apply on Experience APIs whose consumers are typically mobile phones or tablets.
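By contrast, the policies that do fit this scenario are credential-based, since any device anywhere can present them. Below is a minimal sketch of a mobile-style client call, assuming a hypothetical Experience API endpoint; `client_id`/`client_secret` are the default header names for Client ID enforcement, though the policy can be configured differently.

```python
import requests  # third-party: pip install requests

# Illustrative values only; a real mobile app obtains the token via an OAuth 2.0 flow
ACCESS_TOKEN = "eyJhbGciOi...example"
CLIENT_ID, CLIENT_SECRET = "a1b2c3d4e5", "s3cr3t"

response = requests.get(
    "https://api.example.com/mobile/v1/orders",  # hypothetical Experience API endpoint
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",  # OAuth 2.0 access token enforcement
        "client_id": CLIENT_ID,                     # Client ID enforcement
        "client_secret": CLIENT_SECRET,
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())
```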

The application network is recomposable: it is built for change because it "bends but does not break".


A. TRUE


B. FALSE





A.
  TRUE

Explanation:

*****************************************

>> An application network is a recomposable architecture.

>> This means it can be altered without disturbing the overall architecture and its other components.

>> It bends as requirements or the design change, but does not break.

Reference: https://www.mulesoft.com/resources/api/what-is-an-application-network

A company requires Mule applications deployed to CloudHub to be isolated between non-production and production environments. This is so Mule applications deployed to non-production environments can only access backend systems running in their customer-hosted non-production environment, and so Mule applications deployed to production environments can only access backend systems running in their customer-hosted production environment. How does MuleSoft recommend modifying Mule applications, configuring environments, or changing infrastructure to support this type of per-environment isolation between Mule applications and backend systems?


A. Modify properties of Mule applications deployed to the production Anypoint Platform environments to prevent access from non-production Mule applications


B. Configure firewall rules in the infrastructure inside each customer-hosted environment so that only IP addresses from the corresponding Anypoint Platform environments are allowed to communicate with corresponding backend systems


C. Create non-production and production environments in different Anypoint Platform business groups


D. Create separate Anypoint VPCs for non-production and production environments, then configure connections to the backend systems in the corresponding customer-hosted environments





D.
  Create separate Anypoint VPCs for non-production and production environments, then configure connections to the backend systems in the corresponding customer-hosted environments

Explanation

Correct Answer: Create separate Anypoint VPCs for non-production and production environments, then configure connections to the backend systems in the corresponding customer-hosted environments.

*****************************************

Creating different Business Groups does NOT make any difference with respect to access between the non-production and production customer-hosted environments. Applications in either Business Group can still reach both environments unless proper network restrictions are put in place.

We should also never couple Mule application implementations to environments by binding access logic into their properties. Only basic things like endpoint URLs should be bundled into properties, not environment-level access restrictions.

IP addresses on CloudHub are dynamic unless static addresses are specially assigned, so it is not reliable to set up firewall rules in the customer-hosted infrastructure. Moreover, even if static IP addresses were assigned, there could be hundreds of applications running on CloudHub, and setting up rules for all of them would be hectic, unmaintainable, and definitely not a good practice.

The best practice recommended by MuleSoft (in fact, by any cloud provider) is to keep separate Anypoint VPCs for production and non-production, and to set up VPC peering or VPN tunneling from each Anypoint VPC to the corresponding production or non-production customer-hosted network.
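One practical detail when setting this up: the CIDR block of each Anypoint VPC must not overlap with the other VPC or with the customer-hosted networks it peers with. Here is a quick sketch to sanity-check a proposed addressing plan; all CIDR values below are made-up examples.

```python
import ipaddress

# Hypothetical addressing plan: one block per Anypoint VPC plus the
# customer-hosted networks each VPC peers or tunnels with.
networks = {
    "anypoint-vpc-prod":    "10.10.0.0/22",
    "anypoint-vpc-nonprod": "10.20.0.0/22",
    "on-prem-prod":         "192.168.0.0/20",
    "on-prem-nonprod":      "192.168.16.0/20",
}

cidrs = {name: ipaddress.ip_network(block) for name, block in networks.items()}

# Peering and VPN routing break if any two address spaces overlap.
names = list(cidrs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        status = "OVERLAP" if cidrs[a].overlaps(cidrs[b]) else "ok"
        print(f"{status}: {a} vs {b}")
```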

Reference: https://docs.mulesoft.com/runtime-manager/virtual-private-cloud

A company has created a successful enterprise data model (EDM). The company is committed to building an application network by adopting modern APIs as a core enabler of the company's IT operating model. At what API tiers (experience, process, system) should the company require reusing the EDM when designing modern API data models?


A. At the experience and process tiers


B. At the experience and system tiers


C. At the process and system tiers


D. At the experience, process, and system tiers





C.
  At the process and system tiers

Explanation

Correct Answer: At the process and system tiers

*****************************************

Experience-layer APIs are modeled and designed exclusively for the end user's experience, so their data models vary with the nature and type of the API consumer. For example, mobile consumers need lightweight data models that transfer easily on the wire, whereas web-based consumers need detailed data models to render most of the information on web pages, and so on. Enterprise data models serve well as canonical models but are not a good fit for Experience APIs.

That is why EDMs should be reused extensively in the process and system tiers but NOT in the experience tier.
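To make the distinction concrete, here is a small sketch contrasting a canonical EDM entity (reused as-is at the process and system tiers) with a slimmed-down model shaped for a mobile Experience API. All field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CustomerEDM:
    """Canonical enterprise data model entity: complete and consumer-agnostic,
    reused unchanged by process- and system-tier APIs."""
    customer_id: str
    legal_name: str
    billing_address: str
    shipping_address: str
    tax_registration: str
    credit_limit: float
    loyalty_tier: str

@dataclass
class MobileCustomerView:
    """Experience-tier model: lightweight and shaped for one consumer type."""
    customer_id: str
    display_name: str
    loyalty_tier: str

def to_mobile_view(c: CustomerEDM) -> MobileCustomerView:
    # The experience tier projects the canonical model down to only what
    # the mobile app renders, keeping payloads small on the wire.
    return MobileCustomerView(c.customer_id, c.legal_name, c.loyalty_tier)
```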

A retail company with thousands of stores has an API to receive data about purchases and insert it into a single database. Each individual store sends a batch of purchase data to the API about every 30 minutes. The API implementation uses a database bulk insert command to submit all the purchase data to a database using a custom JDBC driver provided by a data analytics solution provider. The API implementation is deployed to a single CloudHub worker. The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker, and then the data is sent to an analytics engine using a proprietary protocol. This process usually takes less than a few minutes. Sometimes a request fails. In this case, the logs show a message from the JDBC driver indicating an out-of-file-space message. When the request is resubmitted, it is successful. What is the best way to try to resolve this throughput issue?


A. Use a CloudHub autoscaling policy to add CloudHub workers


B. Use a CloudHub autoscaling policy to increase the size of the CloudHub worker


C. Increase the size of the CloudHub worker(s)


D. Increase the number of CloudHub workers





C.
  Increase the size of the CloudHub worker(s)

Explanation

Correct Answer: Increase the size of the CloudHub worker(s)

*****************************************

The key details that we can take out from the given scenario are:

API implementation uses a database bulk insert command to submit all the purchase data to a database

JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker

Sometimes a request fails and the logs show a message indicating an out-of-file-space message

Based on the above details:

Neither autoscaling option helps, because autoscaling rules cannot be triggered by error messages. Autoscaling policies kick in based on CPU/memory usage, not on a given error or disk-space condition.

Increasing the number of CloudHub workers also does NOT help, because the failure is not caused by CPU or memory pressure; it is caused by running out of disk space.

Moreover, the API performs a bulk insert of the received batch data, which means each batch is handled by ONE worker at a time. The disk-space issue must therefore be tackled on a per-worker basis; having multiple workers does not help, as a batch can still fail on whichever worker runs out of disk space.

Therefore, the right way to resolve this issue is to increase the vCore size of the worker, so that a worker with more disk space is provisioned.
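Since the failure mode is disk-space exhaustion rather than CPU or memory pressure, it is also worth failing fast with a clear error instead of dying mid-insert. Here is a minimal sketch of such a preflight check; the path and threshold are hypothetical.

```python
import shutil

MIN_FREE_BYTES = 2 * 1024**3  # hypothetical threshold: 2 GiB of temp space

def ensure_temp_space(path: str = "/tmp") -> None:
    """Raise a clear error before the bulk insert starts writing temp files,
    instead of failing mid-way with an out-of-file-space message."""
    free = shutil.disk_usage(path).free
    if free < MIN_FREE_BYTES:
        raise OSError(
            f"Only {free / 1024**2:.0f} MiB free under {path}; "
            f"need at least {MIN_FREE_BYTES / 1024**2:.0f} MiB"
        )

ensure_temp_space()  # call before handing the batch to the JDBC driver
```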

What is the most performant out-of-the-box solution in Anypoint Platform to track transaction state in an asynchronously executing long-running process implemented as a Mule application deployed to multiple CloudHub workers?


A. Redis distributed cache


B. java.util.WeakHashMap


C. Persistent Object Store


D. File-based storage





C.
  Persistent Object Store

Explanation:

Correct Answer: Persistent Object Store

*****************************************

Redis distributed cache is performant, but it is NOT an out-of-the-box solution in Anypoint Platform.

File-based storage is neither performant nor an out-of-the-box solution in Anypoint Platform.

java.util.WeakHashMap requires a completely custom cache implementation in Java code and is limited to the JVM where it runs, which means the cached state is not worker-aware when running on multiple workers. This type of cache is local to each worker, so it is neither out-of-the-box nor worker-aware across multiple workers on CloudHub.

https://www.baeldung.com/java-weakhashmap

Persistent Object Store is an out-of-the-box solution provided by Anypoint Platform that is performant as well as worker-aware across multiple workers running on CloudHub.

https://docs.mulesoft.com/object-store/

So, Persistent Object Store is the right answer.
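The worker-awareness point can be sketched in Python: a per-process dictionary behaves like java.util.WeakHashMap (state visible only inside one worker), while the second class below merely mimics the contract of a shared persistent store; it is not how the Object Store is actually implemented.

```python
class WorkerLocalCache:
    """Analogue of java.util.WeakHashMap: lives inside one JVM/worker.
    A second worker sees its own, empty copy."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class SharedPersistentStore:
    """Stand-in for a Persistent Object Store: all workers read and write
    the same durable state, so it survives restarts and is worker-aware."""
    _shared = {}  # in reality this is platform-managed, persistent storage
    def put(self, key, value):
        SharedPersistentStore._shared[key] = value
    def get(self, key):
        return SharedPersistentStore._shared.get(key)

worker_1, worker_2 = WorkerLocalCache(), WorkerLocalCache()
worker_1.put("txn-42", "IN_PROGRESS")
print(worker_2.get("txn-42"))  # None: local state is invisible to other workers

store_1, store_2 = SharedPersistentStore(), SharedPersistentStore()
store_1.put("txn-42", "IN_PROGRESS")
print(store_2.get("txn-42"))  # 'IN_PROGRESS': shared state is worker-aware
```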

What is most likely NOT a characteristic of an integration test for a REST API implementation?


A. The test needs all source and/or target systems configured and accessible


B. The test runs immediately after the Mule application has been compiled and packaged


C. The test is triggered by an external HTTP request


D. The test prepares a known request payload and validates the response payload





B.
  The test runs immediately after the Mule application has been compiled and packaged

Explanation

Correct Answer: The test runs immediately after the Mule application has been compiled and packaged

*****************************************

Integration tests are the last layer of tests we need to add to be fully covered.

These tests actually run against Mule with your full configuration in place and are exercised from an external source, just as they would be in production.

These tests exercise the application as a whole with actual transports enabled. So, external systems are affected when these tests run.

So, these tests do NOT run immediately after the Mule application has been compiled and packaged.

FYI: unit tests are the ones that run immediately after the Mule application has been compiled and packaged.
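Below is a minimal integration-test sketch in Python against a deployed endpoint (the URL and payload are invented). Note how it exhibits the other three options: it needs the target system accessible, it is triggered by an external HTTP request, and it prepares a known request payload and validates the response payload.

```python
import requests  # third-party: pip install requests

BASE_URL = "https://orders-api.example.com/api"  # hypothetical deployed endpoint

def test_create_order_roundtrip():
    # Known request payload prepared by the test
    payload = {"customerId": "C-1001", "items": [{"sku": "SKU-1", "qty": 2}]}

    # Triggered by an external HTTP request against the running application
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

    # Validate the response payload
    assert response.status_code == 201
    body = response.json()
    assert body["customerId"] == "C-1001"
    assert "orderId" in body
```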

Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies#integration-testing

An organization is implementing a Quote of the Day API that caches today's quote. What scenario can use the CloudHub Object Store via the Object Store connector to persist the cache's state?


A. When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state


B. When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state


C. When there is one deployment of the API implementation to CloudHub and another deployment to a customer-hosted Mule runtime that must share the cache state


D. When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state





D.
  When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state

Explanation

Correct Answer: When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state.

*****************************************

Key details in the scenario:

Use the CloudHub Object Store via the Object Store connector

Considering the above details:

CloudHub Object Stores have a one-to-one relationship with CloudHub Mule applications.

We CANNOT share an application's CloudHub Object Store among multiple Mule applications running in different regions, Business Groups, or customer-hosted Mule runtimes by using the Object Store connector.

If sharing is truly required, Anypoint Platform does allow access to another application's CloudHub Object Store via the Object Store REST API, but NOT via the Object Store connector.

So, the only scenario where we can use the CloudHub Object Store via the Object Store connector to persist the cache's state is when there is one CloudHub deployment of the API implementation to multiple CloudHub workers that must share the cache state.
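For the cross-application cases, the Object Store REST API mentioned above is the escape hatch. Here is a hedged sketch of such a call; the host, URL shape, and identifiers are assumptions for illustration, so check the Object Store v2 REST API documentation for the exact contract.

```python
import requests  # third-party: pip install requests

# All identifiers and the URL shape below are illustrative assumptions.
REGION_HOST = "object-store-us-east-1.anypoint.mulesoft.com"
ORG_ID, ENV_ID, STORE_ID = "my-org-id", "my-env-id", "my-app-store"
TOKEN = "access-token-from-anypoint"

url = (f"https://{REGION_HOST}/api/v1/organizations/{ORG_ID}"
       f"/environments/{ENV_ID}/stores/{STORE_ID}/keys/quote-of-the-day")

resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
resp.raise_for_status()
print(resp.json())  # today's cached quote, read from another app's store
```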

A code-centric API documentation environment should allow API consumers to investigate and execute API client source code that demonstrates invoking one or more APIs as part of representative scenarios. What is the most effective way to provide this type of code-centric API documentation environment using Anypoint Platform?


A. Enable mocking services for each of the relevant APIs and expose them via their Anypoint Exchange entry


B. Ensure the APIs are well documented through their Anypoint Exchange entries and API Consoles and share these pages with all API consumers


C. Create API Notebooks and include them in the relevant Anypoint Exchange entries


D. Make relevant APIs discoverable via an Anypoint Exchange entry





C.
  Create API Notebooks and include them in the relevant Anypoint Exchange entries

Explanation

Correct Answer: Create API Notebooks and include them in the relevant Anypoint Exchange entries

*****************************************

API Notebooks are the feature on Anypoint Platform that enables us to provide code-centric API documentation.

Reference: https://docs.mulesoft.com/exchange/to-use-api-notebook

