I would like to know if there is a non-hacky way to trigger a single workflow on any change made to a table (Create/Update/Delete record, i.e. the CUD part of CRUD). I thought this would have already been discussed on this community blog, but I didn't find a matching entry.
Imagine I have a Dashboard that I want to refresh each time a table "A" is updated. I create a workflow that adds/updates values in table D (the dashboard table); it would track the changes to table A and compute some aggregates using a CUD trigger.
Initial setup V0 - can only track Create & Update
Now, I can track Create + Update using a "Last Modified At" column and a workflow set to trigger on "Updated Record".
However, deleting a record does not fire the "Updated Record" event, so no workflow is triggered: this setup is not enough.
Updated setup V1 - can track Create & Update & Delete (with a 3rd table “System State” + 3 workflows)
I thought of creating a "System State" table that tracks the number of rows in table A and the latest (MAX) "Last Modified At" value of table A.
I would need 3 workflows:
W1. Track the "Deleted Record" event of table A to update the row count and "Max_Last_Updated_At" values stored in System State
W2. Track the "Updated Record" event of table A (same as V0) to perform the same update as W1
W3. Track the “Updated Record” events of System State table in order to update table D (dashboard table).
The main benefit of this approach is that W1 and W2 are rather simple compared to W3, so I end up duplicating only simple workflows.
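Softr workflows are configured in the UI rather than in code, but the logic of the three workflows can be sketched in Python. This is only an illustration; all table and field names below are assumptions:

```python
from datetime import datetime, timezone

# Illustrative in-memory stand-ins for the Softr tables (all names are assumptions).
table_a = []                                   # the source table being watched
system_state = {"row_count": 0, "max_last_modified_at": None}
table_d = {"total_rows": 0}                    # the dashboard table

def w3_on_system_state_updated():
    """W3: System State changed -> recompute the dashboard aggregates."""
    table_d["total_rows"] = system_state["row_count"]

def refresh_system_state():
    """Shared body of W1 and W2: recompute the System State record from table A."""
    system_state["row_count"] = len(table_a)
    system_state["max_last_modified_at"] = max(
        (r["last_modified_at"] for r in table_a), default=None
    )
    w3_on_system_state_updated()   # in Softr, this fires via the Updated Record trigger

def w2_on_record_updated(record):
    """W2: Updated Record on table A (creates also land here via Last Modified At)."""
    record["last_modified_at"] = datetime.now(timezone.utc)
    refresh_system_state()

def w1_on_record_deleted(record):
    """W1: Deleted Record on table A."""
    table_a.remove(record)
    refresh_system_state()
```

The point of the indirection is visible in the call chain: W1 and W2 only touch System State, and only W3 knows how to rebuild the dashboard.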
However, I was hoping there would be a less hacky way of doing that.
Any suggestions?
Side note: if you track the "Last Modified At" column of table D (the dashboard table) with a second automation, that column may not always be updated when a record is added. To make sure it gets updated on any record creation, you may need to set the "Last Modified At" column to "Watch for updates on > Any fields (including readonly)".
I’d probably implement a soft delete (flag) to trigger the update log (maybe with even more details: “Deleted”) and do the actual deletion right after that update?
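A minimal sketch of that order of operations, assuming a generic update handler standing in for the "Updated Record" workflow (all names are illustrative):

```python
update_log = []   # stand-in for whatever the update workflow writes

def on_record_updated(record):
    """Stand-in for Softr's 'Updated Record' workflow."""
    update_log.append({"id": record["id"], "deleted": record.get("deleted", False)})

def delete_record(table, record):
    record["deleted"] = True      # soft-delete flag: this update fires the workflow
    on_record_updated(record)     # the log captures a "Deleted" entry
    table.remove(record)          # actual deletion happens right after the update
```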
Hi @Tuur,
Thanks for the suggestion. Your idea works and it is a good fix. One downside is that it may mix the tasks of updating table D and deleting a row in table A.
The benefit that I see with my "System State" table solution is that I can separate concerns into independent workflows. For instance, workflow W1, which tracks deletions in table A, is independent from the update of table D performed by W3: if the table D update (W3) fails, it doesn't block the deletion of the record in table A (W1). Also, I am sure that W1 happens before W3.
That being said, even with the soft delete the user would not see a difference, since we would filter the records loaded from table A on the "soft delete" field. The main impacts are: 1) growth of the number of rows in table A, and 2) having to maintain both automations in a single workflow.
A follow-up question: what if you want to update two tables, D and E, after A is modified? Which workflow would handle the soft delete? And how do we handle races (when to apply the actual deletion, after "soft delete" is set to True, with respect to the other processes)?
Hi! You could either trigger W2 after W1 (e.g. the last step would be a call), merge them into one workflow with conditions, make the deletion a separate timer-activated workflow, or something else. There are usually many ways to solve something.
I wonder why you would want two separate ‘log’ tables (D+E) though…
Do you mean calling W2 after W1 using a webhook trigger in W2 and a “Call API” step in W1?
After more thought, I realised that my architecture was greedy in terms of action consumption (quickly hitting Softr paid-plan limits). Using a soft_delete flag plus periodic batch updates via Recurring Schedule Triggers is probably the way to go (maybe with a refresh button on the UI).
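A rough sketch of what one batch pass of such a scheduled workflow would do; this is an assumption about the shape of the data, with illustrative table and field names:

```python
from datetime import datetime, timezone

def scheduled_refresh(table_a, table_d):
    """One batch pass, run by a Recurring Schedule Trigger (names are assumptions):
    recompute the dashboard aggregates from live rows, then purge soft-deleted rows."""
    live = [r for r in table_a if not r.get("soft_delete")]
    table_d["total_rows"] = len(live)          # recompute aggregates once per pass
    table_a[:] = live                          # hard-delete the soft-deleted rows
    table_d["refreshed_at"] = datetime.now(timezone.utc)
```

One pass consumes a fixed number of actions regardless of how many CUD events happened since the last run, which is what keeps it within plan limits.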
Regarding your question about the log table: if the goal is just to capture operations, you are right. Adding a "Last Updated At" column and a "soft delete" column on the main table, and using a standard "Record Updated" trigger for the workflow, is enough.
However, my original rationale for the log table was to act as an asynchronous queue. It separates concerns so that updating the dashboard does not block the user’s CUD operations. Because a failed step in a workflow skips all subsequent steps, a monolithic workflow is risky. If the dashboard update step fails for any reason, having a decoupled log table ensures that the user’s actual data operation on Table A still succeeds. It also keeps the workflows smaller and easier to debug.
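As a sketch of that queue idea (all names are assumptions): appending to the log is cheap and hard to fail, while a failure in the consumer leaves entries queued for retry and never touches table A.

```python
log_queue = []   # the "log table" acting as an asynchronous queue (illustrative)

def on_table_a_change(op, record_id):
    """W1/W2 role: just append to the log; the user's CUD operation is not blocked."""
    log_queue.append({"op": op, "id": record_id})

def drain_log(update_dashboard):
    """W3 role: consume the log; if the dashboard update fails, stop and retry later."""
    while log_queue:
        entry = log_queue[0]
        try:
            update_dashboard(entry)
        except Exception:
            return                 # entries stay queued; table A was never blocked
        log_queue.pop(0)           # only dequeue after a successful update
```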