Let's figure out how to solve the linear SDE with multiplicative noise
$$dX_t = \left(a_1(t)X_t + a_2(t)\right)dt + \left(b_1(t)X_t + b_2(t)\right)dW_t.$$
The tl;dr version is
$$X_t = \Phi_t\left(X_0 + \int_0^t\Phi_s^{-1}(a_2(s) - b_1(s)b_2(s))ds + \int_0^t\Phi_s^{-1}b_2(s)dW_s\right)$$
with
$$\Phi_t = e^{\int_0^t (a_1(s)-\frac{1}{2}b_1(s)^2) ds + \int_0^tb_1(s)dW_s}$$
But I want to show how to get there. If we attempted to go about this like last time, we would look at the homogeneous equation
$$dX^h_t = a_1(t)X_t^h dt + b_1(t)X_t^hdW_t.$$
Dividing by $X_t^h$,
$$ \frac{dX^h_t}{X_t^h} = a_1(t) dt + b_1(t)dW_t.$$
Integrating,
$$\ln(X_t^h) = \int_0^t a_1(s)\, ds + \int_0^t b_1(s)\, dW_s.$$
But this would be wrong. The trouble is that $d\ln(X_t^h) \neq a_1(t)\, dt + b_1(t)\, dW_t$. This is because, unlike in the additive noise case, with multiplicative noise even the homogeneous solution is a function of both time and $W_t$. Therefore, we need to use Itô's correction. In particular, notice that
$$d\ln(X_t^h) = \frac{\partial \ln(X_t^h)}{\partial X_t^h}dX_t^h + \frac{1}{2}\frac{\partial^2 \ln(X_t^h)}{\partial (X_t^h)^2}(dX_t^h)^2$$
If $X_t^h$ were only a function of time, then $(dX_t^h)^2 \sim O(dt^2)$, and hence ignorable. But with the extra $(dW_t)^2 = dt$ tucked in there, we need to account for it. So, expanding the above for $dX^h_t = a_1(t)X_t^h\, dt + b_1(t)X_t^h\, dW_t$ and noting that $(dX_t^h)^2 = b_1(t)^2(X_t^h)^2\, dt$ to leading order, we get
$$ d\ln(X_t^h) = \left(a_1(t)-\frac{1}{2}b_1(t)^2\right) dt + b_1(t)dW_t$$
The extra $-\frac{1}{2}b_1(t)^2$ is Itô's correction to the differential. Now that we have the correct form of $d\ln(X_t^h)$, we can integrate and exponentiate to get
$$X_t^h = X_0e^{\int_0^t (a_1(s)-\frac{1}{2}b_1(s)^2) ds + \int_0^tb_1(s)dW_s}$$
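Before moving on, a quick numerical sanity check may help. The sketch below (a minimal example, assuming constant $a_1$ and $b_1$ and arbitrary parameter values) simulates the homogeneous SDE with Euler-Maruyama and compares it to this closed form along the same Brownian path; the two agree up to discretization error.

```python
import numpy as np

# Sanity check (sketch, constant a1 and b1 assumed): simulate
# dX = a1*X dt + b1*X dW with Euler-Maruyama and compare to the closed form
# X0 * exp((a1 - 0.5*b1**2) * t + b1 * W_t) along the same Brownian path.
rng = np.random.default_rng(0)
a1, b1, X0 = 0.3, 0.8, 1.0
T, n = 1.0, 100_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.cumsum(dW)                 # W_t at the right endpoint of each step
t = np.arange(1, n + 1) * dt

# Euler-Maruyama for the homogeneous SDE
X = np.empty(n + 1)
X[0] = X0
for k in range(n):
    X[k + 1] = X[k] + a1 * X[k] * dt + b1 * X[k] * dW[k]

# Closed-form solution evaluated along the same path
X_exact = X0 * np.exp((a1 - 0.5 * b1**2) * t + b1 * W)

print(abs(X[-1] - X_exact[-1]))   # small, and shrinks as dt -> 0
```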
The exponential factor is our fundamental solution, which we will again denote by $\Phi_t$. Like before, $\Phi_t^{-1}$ is our integrating factor, so we will use the same method as before: evaluate and then integrate $d(X_t\Phi_t^{-1})$. Except this time $\Phi_t$ is a function of $W_t$, so the differential will need some care. So let's step back and evaluate the more general $d(Y_tZ_t)$, where
$$ dY_t = f_1(t,W_t)dt + g_1(t,W_t)dW_t $$
$$ dZ_t = f_2(t,W_t)dt + g_2(t,W_t)dW_t $$
Taking a Taylor series expansion of $d(Y_tZ_t)$ gives
$$ d(Y_tZ_t) = \frac{\partial (Y_tZ_t)}{\partial t}dt + \frac{\partial (Y_tZ_t)}{\partial W_t}dW_t + \frac{1}{2}\frac{\partial^2 (Y_tZ_t)}{\partial W_t^2}dt$$
Differentiating, expanding the differentials, collecting like terms, and removing higher order terms yields
$$d(Y_tZ_t) = \left(Z_tf_1 + Y_tf_2 + g_1g_2\right)dt + \left(Z_tg_1 + Y_tg_2\right)dW_t$$
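If you'd rather check the algebra symbolically, here is a minimal sketch using sympy that expands $(Y_t + dY_t)(Z_t + dZ_t) - Y_tZ_t$ and applies the Itô multiplication table $(dW_t)^2 = dt$, $dt\,dW_t = dt^2 = 0$ (the symbols are just formal placeholders, not actual stochastic processes):

```python
import sympy as sp

# Symbolic sanity check of the Ito product rule: expand
# d(YZ) = (Y + dY)(Z + dZ) - Y*Z, then apply the multiplication table
# dW^2 -> dt, dt*dW -> 0, dt^2 -> 0 and collect terms.
Y, Z, f1, g1, f2, g2, dt, dW = sp.symbols('Y Z f1 g1 f2 g2 dt dW')

dY = f1 * dt + g1 * dW
dZ = f2 * dt + g2 * dW

d_YZ = sp.expand((Y + dY) * (Z + dZ) - Y * Z)
d_YZ = d_YZ.subs(dW**2, dt).subs(dt**2, 0).subs(dt * dW, 0)

print(sp.collect(sp.expand(d_YZ), [dt, dW]))
# -> (Z*f1 + Y*f2 + g1*g2)*dt + (Z*g1 + Y*g2)*dW  (up to term ordering)
```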
You should try this for yourself by hand to feel more comfortable with the derivation. It's a straightforward calculation. Coming back to the original problem, we know $dX_t$, so let's find $d\Phi_t^{-1}$.
$$ d\Phi_t^{-1} = \Phi_t^{-1}(b_1(t)^2-a_1(t))dt - \Phi_t^{-1}b_1(t)dW_t$$
Again, you should try applying Itô's formula to $\Phi_t^{-1}$ to get this. We now know the functions $f_1, f_2, g_1$, and $g_2$ from the differentials $dX_t$ and $d\Phi_t^{-1}$. Solving is now a matter of plugging in and simplifying. After some more basic algebra we find
$$d(X_t\Phi_t^{-1}) = \Phi_t^{-1}(a_2(t) - b_1(t)b_2(t))dt + \Phi_t^{-1}b_2(t)dW_t$$
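For completeness, here is the plug-in step spelled out (with the $t$ arguments suppressed): take $Y_t = X_t$ and $Z_t = \Phi_t^{-1}$, so $f_1 = a_1X_t + a_2$, $g_1 = b_1X_t + b_2$, $f_2 = \Phi_t^{-1}(b_1^2 - a_1)$, and $g_2 = -\Phi_t^{-1}b_1$. The product rule then gives
$$d(X_t\Phi_t^{-1}) = \left[\Phi_t^{-1}(a_1X_t + a_2) + X_t\Phi_t^{-1}(b_1^2 - a_1) - (b_1X_t + b_2)\Phi_t^{-1}b_1\right]dt + \left[\Phi_t^{-1}(b_1X_t + b_2) - X_t\Phi_t^{-1}b_1\right]dW_t,$$
and the $X_t$ terms cancel in both brackets, leaving the expression above.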
Integrating and then multiplying by $\Phi_t$ gives our solution
$$X_t = \Phi_t\left(X_0 + \int_0^t\Phi_s^{-1}(a_2(s) - b_1(s)b_2(s))ds + \int_0^t\Phi_s^{-1}b_2(s)dW_s\right)$$
with
$$\Phi_t = e^{\int_0^t (a_1(s)-\frac{1}{2}b_1(s)^2) ds + \int_0^tb_1(s)dW_s}$$
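Since everything is now in closed form, a numerical check of the full solution is straightforward. The sketch below (with arbitrarily chosen coefficient functions, purely for illustration) simulates the full SDE with Euler-Maruyama and compares it to the closed form evaluated along the same Brownian path, discretizing the Itô integrals with left endpoints:

```python
import numpy as np

# Sketch: Euler-Maruyama for dX = (a1 X + a2) dt + (b1 X + b2) dW vs. the
# closed-form solution, along one Brownian path. Coefficients are arbitrary
# illustrative choices.
rng = np.random.default_rng(1)

a1 = lambda t: 0.5 * np.cos(t)
a2 = lambda t: 1.0
b1 = lambda t: 0.4
b2 = lambda t: 0.2

X0, T, n = 1.0, 1.0, 100_000
dt = T / n
t = np.arange(n) * dt                  # left endpoint of each step
dW = rng.normal(0.0, np.sqrt(dt), n)

# Euler-Maruyama
X = X0
for k in range(n):
    X += (a1(t[k]) * X + a2(t[k])) * dt + (b1(t[k]) * X + b2(t[k])) * dW[k]

# Closed form: Phi_t plus the two integrals, discretized with left endpoints
# (the Ito convention) so the stochastic integral is approximated correctly.
Phi = np.exp(np.cumsum((a1(t) - 0.5 * b1(t)**2) * dt + b1(t) * dW))
Phi_left = np.concatenate(([1.0], Phi[:-1]))   # Phi at the left endpoints
I_dt = np.sum((a2(t) - b1(t) * b2(t)) / Phi_left * dt)
I_dW = np.sum(b2(t) / Phi_left * dW)
X_exact = Phi[-1] * (X0 + I_dt + I_dW)

print(abs(X - X_exact))                # small, and shrinks as dt -> 0
```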
I'll end with an example. The equation for Geometric Brownian Motion is
$$dX_t = \mu X_t dt + \sigma X_t dW_t$$
Before we solve, let's get a feel for this equation. First, there are no forcing terms: the process is driven only by its current value and the noise. Second, it has deterministic exponential growth or decay depending on the sign of $\mu$, which we can refer to as the growth rate. Finally, it has multiplicative noise proportional to the current position $X_t$ and a constant $\sigma$, which we'll refer to as the volatility (and $\sigma^2$ as the variance). Changes due to noise will be large when $X_t$ is large and small when $X_t$ is small. This contrasts with the Langevin equation from before, where the additive noise had the same absolute magnitude no matter the size of $X_t$. Solving, we get
$$X_t = X_0e^{(\mu-\frac{1}{2}\sigma^2)t + \sigma W_t}$$
Notice that if the deterministic rate of growth $\mu$ is smaller than half the variance, then we actually get a trend of decay, even with a positive growth rate $\mu$! You can think of this in terms of finance (where this equation is used extensively). If you have $\$100$ and lose 25 percent, you need a $33.\overline{3}$ percent gain to get back to where you started. Losses hit harder than gains grow. Even if you have a positive rate of return on an asset, high volatility can wipe you out. Say your rate of return is 1 percent. You get unlucky and lose 25 percent net (26 percent down from volatility, offset by the 1 percent return). To get back to where you started, you then need to get lucky with volatility the other way, not by 26 percent, but by $32.\overline{3}$ percent: 1 percent from the deterministic rate of return plus the extra $32.\overline{3}$ percent from volatility, for a total of $33.\overline{3}$ percent. The odds are lopsided and against you. An extreme example, I know, but it illustrates the point well.
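To see the $\mu < \frac{1}{2}\sigma^2$ effect numerically, here is a small sketch (with arbitrary illustrative parameters) that samples terminal values of the closed-form GBM solution: the mean grows like $e^{\mu t}$, but the median, which tracks a typical path, decays like $e^{(\mu - \sigma^2/2)t}$.

```python
import numpy as np

# Positive drift, high volatility: mu - sigma^2/2 < 0, so typical paths decay
# even though the mean grows. Parameters are arbitrary illustrative choices.
rng = np.random.default_rng(2)
mu, sigma, X0, T = 0.05, 0.5, 100.0, 20.0    # mu - sigma^2/2 = -0.075 < 0
paths = 1_000_000

W_T = rng.normal(0.0, np.sqrt(T), size=paths)                # W_T ~ N(0, T)
X_T = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)

print("mean   X_T:", X_T.mean())       # ~ X0 * exp(mu*T): grows
print("median X_T:", np.median(X_T))   # ~ X0 * exp((mu - sigma^2/2)*T): decays
```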