Total 234 Questions
Last Updated On : 30-Jun-2025
Preparing with the Salesforce-MuleSoft-Developer-I practice test is essential to ensure success on the exam. This Salesforce SP25 practice test lets you familiarize yourself with the Salesforce-MuleSoft-Developer-I question format and identify your strengths and weaknesses. By practicing thoroughly, you can maximize your chances of passing the Salesforce certification Spring 2025 release exam on your first attempt. Surveys from various platforms and user-reported pass rates suggest that Salesforce-MuleSoft-Developer-I practice exam users are roughly 30-40% more likely to pass.
Refer to the exhibit.
What is the output payload in the On Complete phase?
A. summary statistics with NO record data
B. The records processed by the last batch step: [StepTwo1, StepTwo2, StepTwo3]
C. The records processed by all batch steps: [StepTwoStepOne1, StepTwoStepOne2, StepTwoStepOne3]
D. The original payload: [1,2,3]
Explanation:
In Mule’s Batch Job, the On Complete phase is designed to emit a summary of execution rather than the transformed record payloads. By default, it collects and outputs statistics such as total records processed, number of successful or failed items, and any errors encountered. It doesn’t return the per-record payloads that were produced in your Batch_Step1 or Batch_Step2.
Even though each record was transformed to values like "StepTwoStepOne1", the On Complete logger will show only the batch statistics object, not an array of those strings. If you inspect #[payload] in On Complete, you'll see something like { processedRecords: 3, failedRecords: 0, ... } along with other summary fields.
To capture the actual record outputs in On Complete, you must explicitly aggregate them during your batch steps (for example using a VM queue or a payload accumulator) and then reference that collection in On Complete. Otherwise, Mule purges the record details once the summary is emitted.
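As a rough sketch, a Logger in the On Complete phase could surface those statistics with an expression like the one below (field names such as processedRecords, failedRecords, and totalRecords come from Mule's BatchJobResult object; treat them as assumptions and verify against your runtime's documentation):

#["Batch finished: $(payload.processedRecords) processed, " ++
  "$(payload.failedRecords) failed out of $(payload.totalRecords) total"]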
Refer to the exhibit.
What data is expected by the POST /accounts endpoint?
A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
The POST /accounts endpoint in the RAML specification expects a JSON payload with name, address, customer_since (and optionally other fields). Option D matches the exact structure shown in the RAML's example under post (minus id, which is server-generated).
Why Other Options Are Wrong:
A: Includes id (generated by server, not client input).
B: XML format (not JSON, as specified in RAML).
C: XML format + extra field (bank_agent_id) not in RAML example.
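For illustration only (the exhibit is not reproduced here, so the values below are invented), a JSON request body matching this structure might look like:

{
  "name": "Acme Bank",
  "address": "123 Main Street",
  "customer_since": "2021-04-01"
}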
What is the difference between a subflow and a sync flow?
A. No difference
B. Subflow has no error handling of its own and sync flow does
C. Sync flow has no error handling of its own and subflow does
D. Subflow is synchronous and sync flow is asynchronous
Explanation:
In MuleSoft, Subflows and Synchronous flows (sync flows) are both reusable flow types, but they behave differently, especially regarding error handling. A Subflow is a lightweight flow that always runs synchronously and shares the error handling context of the calling flow — it cannot have its own error handler. This makes subflows ideal for simple, reusable logic that doesn't need custom error control.
On the other hand, a Sync Flow (also referred to as a regular flow with a flow-ref) can have its own error handling and can be reused like a subflow. Though it also executes synchronously when invoked via flow-ref, it allows for defining custom error handling logic within itself, making it more flexible for complex logic that may need isolation or custom fault tolerance.
This distinction is critical in designing robust Mule applications. If you need custom error handling for a reusable block of logic, a synchronous flow is the better choice. If the logic is straightforward and should share error handling with the parent, a subflow is lighter and easier to use.
Incorrect Options:
A. No difference – Incorrect; error handling behavior differs.
C. Sync flow has no error handling... – Sync flows can have their own error handlers.
D. Subflow is synchronous and sync flow is asynchronous – Both are synchronous by default.
Refer to the exhibits.
A web client sends a POST request to the HTTP Listener with the payload "Hello-". What response is returned to the web client?
A. Hello-HTTP-JMS2-Three
B. HTTP-JMS2-Three
C. Hello-JMS1-HTTP-JMS2-Three
D. Hello-HTTP-Three
Explanation:
The flow processes the initial payload ("Hello-") sequentially:
HTTP POST /data fails (simulated error), triggering the HTTP error handler, which appends "HTTP-".
JMS two then fails, triggering the JMS error handler, which appends "JMS2-".
Finally, the set-payload appends "Three".
Result: "Hello-HTTP-JMS2-Three" (Option A).
Why Other Options Are Wrong:
B: Missing initial "Hello-".
C: Incorrectly includes "JMS1-" (not triggered in this flow).
D: Missing "JMS2-" from the second error handler.
A Batch Job scope has five batch steps. An event processor throws an error in the second batch step because the input data is incomplete. What is the default behavior of the batch job after the error is thrown?
A. All processing of the batch job stops.
B. Event processing continues to the next batch step.
C. Error is ignored
D. Batch is retried
Explanation:
In case of an error, the batch job by default completes any in-flight steps for records already dispatched and then stops further processing. (MuleSoft Doc Ref: Handling Errors During Batch Job | MuleSoft Documentation.) This default can be changed through the Max Failed Records field (General -> Max Failed Records). Mule has three options for handling a record-level error, all controlled by Max Failed Records:
The default value is 0, which corresponds to Finish processing: stop the batch job as soon as a record fails.
The value -1 corresponds to Continue processing, regardless of how many records fail.
A positive integer corresponds to Continue processing until the batch job accumulates that maximum number of failed records.
Refer to the exhibits.
What message should be added to the Logger component so that the logger prints "The city is Pune" (the double quotes should not be part of the logged message)?
A. #["The city is" ++ payload.City]
B. The city is + #[payload.City]
C. The city is #[payload.City]
D. #[The city is ${payload.City}]
Explanation:
The correct answer is: The city is #[payload.City]. This can be confused with the option #["The city is" ++ payload.City], but note that that option will not print the space between "is" and the city name; it would log The city isPune.
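A small DataWeave sketch makes the spacing difference concrete (the variable record below is a stand-in for the Mule payload, introduced here only for illustration):

%dw 2.0
output application/json
var record = { City: "Pune" } // stands in for payload in the Mule app
---
{
  withoutSpace: "The city is" ++ record.City, // "The city isPune"
  withSpace: "The city is " ++ record.City // "The city is Pune"
}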
Refer to the exhibits. The Set Payload transformer in the addItem child flow uses DataWeave to create an order object.
What is the correct DataWeave code for the Set Payload transformer in the createOrder flow to use the addItem child flow to add a router with a price of 100 to the order?
A. lookup( "addltern", { price: "100", item: "router", itemType: "cable" } )
B. addltem( { payload: { price: "100", item: "router", itemType: "cable" > } )
C. lookup( "addltem", { payload: { price: "100", item: "router", itemType: "cable" } > )
D. addltem( { price: "100", item: "router", itemType: "cable" } )
Explanation:
%dw 2.0
output application/json
---
lookup("addItem", { payload: { price: "100", item: "router", itemType: "cable" } })
Why:
lookup( flowName, args ) invokes the child flow named addItem.
Mule wraps your map in a payload key when calling child flows, so your input map must live under payload.
The DataWeave code matches the child flow’s expected payload.item, payload.itemType, and payload.price fields.
Other options fail because they either:
Do not wrap the input map under payload (A, D).
Try to call addItem directly as a function, which is not how a flow is invoked from DataWeave (B, D).
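As a side note, lookup also accepts an optional third argument, a timeout in milliseconds, per the DataWeave Mule module documentation (the value below is arbitrary):

lookup("addItem", { payload: { price: "100", item: "router", itemType: "cable" } }, 5000)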
Which Mule component provides a real-time, graphical representation of the APIs and mule applications that are running and discoverable?
A. API Notebook
B. Runtime Manager
C. Anypoint Visualizer
D. API Manager
Explanation:
Anypoint Visualizer is the MuleSoft component that provides a real-time, graphical visualization of APIs, Mule applications, and the way they interact within your ecosystem. It shows data flow between services, helps detect anomalies, and supports architecture governance by mapping service dependencies. Visualizer pulls live data from Runtime Manager and API gateways, displaying communication paths and dependencies.
This tool is especially useful for observability in complex environments where multiple APIs and applications interact. You can filter views by layer (Experience, Process, System), environment, or policy compliance. This real-time map enables architects and ops teams to quickly understand and troubleshoot live systems.
Unlike other tools in Anypoint Platform, Visualizer focuses on topology and live traffic, not just configuration or management. It’s ideal for identifying service chokepoints, unauthorized connections, or understanding microservice architecture health in production.
Incorrect Options:
A. API Notebook – Used for interactive API testing and documentation, not visualization.
B. Runtime Manager – Manages deployments and logs; no visual mapping.
D. API Manager – Manages API policies and access control, not topology mapping.
A Mule project contains a DataWeave module file WebStore.dwl that defines a function named loginUser. The module file is located in the project's src/main/resources/libs/dw folder.
What is the correct DataWeave code to import all of the WebStore.dwl file's functions and then call the loginUser function for the login "cindy.park@example.com"?
A. Option A
B. Option B
C. Option C
D. Option D
Explanation:
You need to import all functions from the WebStore.dwl module under src/main/resources/libs/dw. The correct syntax uses the from keyword with the module’s namespace libs::dw::WebStore. Then you can call loginUser directly without qualifying it:
%dw 2.0
output application/json
import * from libs::dw::WebStore
---
loginUser("cindy.park@example.com")
This brings every function in WebStore.dwl into the script’s scope and lets you call loginUser directly.
Options A and C use incorrect import paths or syntax, while B qualifies the function call instead of making it directly available.
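For comparison, the qualified style (which Option B presumably shows) is also valid DataWeave; it imports the module by name, so every call must be prefixed with the module name rather than being directly available:

%dw 2.0
output application/json
import libs::dw::WebStore
---
WebStore::loginUser("cindy.park@example.com")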
Which of the following functionalities does the zip operator provide in DataWeave?
A. Merges elements of two lists (arrays) into a single list
B. Used for sending attachments
C. Minimize the size of long text using encoding.
D. All of the above
Explanation:
The zip operator in DataWeave combines elements from two arrays/lists into a single list of pairs (or tuples). For example:
[1, 2] zip ["a", "b"] // Output: [[1, "a"], [2, "b"]]
This matches Option A.
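A runnable sketch of zip's pairing behavior, including how unmatched elements are dropped when the arrays differ in length:

%dw 2.0
output application/json
---
{
  sameLength: [1, 2] zip ["a", "b"], // [[1,"a"],[2,"b"]]
  uneven: [1, 2, 3] zip ["a", "b"] // [[1,"a"],[2,"b"]] (the 3 has no partner and is dropped)
}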
Why Other Options Are Wrong:
B (Attachments): Unrelated—zip doesn’t handle MIME attachments.
C (Text compression): zip doesn’t encode/minimize data.
D (All): Incorrect, as only Option A is valid.