Tech Art Sketchbook

Work In Progress / 05 August 2020

I've got a large folder of Orphans - UE4 experiments without a home. Most of these were made in a few hours, or a few days of spare time - a quiet Wednesday night here, a lazy Sunday afternoon there. I'd love to write a full breakdown for each of them, but each breakdown is a huge time investment, often longer than the project itself.

Some of these may get comprehensive writeups in the future, but I'm putting them here in the meantime.

Careful where you put your hands, friends. These experiments have some rough edges.

Subsurface / stylised gemstone material.

Lava shader / flowmaps

Generating a sphere in blueprint - nothing revolutionary (and really slow), but I'm glad I went through the process

Writing procedural glyphs to a render target

Dungeon generator - the pathfinding / connections don't work as well as I'd like (yet)


Tentacle-based procedural animation, inspired by the game Carrion


Material VFX - would look better with some additional layers, and particles. They're surprisingly cheap as is.

Sigil generator - reused some of the code from the Rune generator. As with all of these entries: needs more work.

That's all for now!

I'll try to post future ones as their own posts. Let me know if there's any further information you'd like on any of these.

Most, if not all of these borrow from the work of others. I genuinely appreciate the people whose blogs and breakdowns I've learned from. I couldn't have made these without the breakdowns and tutorials of people like Taizyd Korambayil, Amit Patel, Oleg Dolya, Thomas Harle, and Simon Trümpler.

Isometric Camera in UE4

Tutorial / 04 June 2020


The preramble:

I've been terrible about posting new art on Artstation for the last few years. 
I can't post any of my work from my day job, and most of my weekend experiments haven't been art - I've been working on improving my blueprints, shaders and design. Though those are valuable skills, they don't give me much I can post to my portfolio.

I've decided I want to start using these blogs to document some of my experiments. I hope that in breakdown / tutorial form, they'll be useful to someone out there. I've learned a lot from other people's breakdowns over the years.

This one started as most of my experiments do. "You know, I've never made..."

A simple Isometric Camera

I saw some nice isometric pixel art, and thought I'd like to try making an Isometric camera in UE4.

My first step was to visit the Wikipedia page for Isometric Projection:

Isometric projection is a method for visually representing three-dimensional objects in two dimensions in technical and engineering drawings. It is an axonometric projection in which the three coordinate axes appear equally foreshortened and the angle between any two of them is 120 degrees.

Here's the important part:

An isometric view of an object can be obtained by choosing the viewing direction such that the angles between the projections of the x, y, and z axes are all the same, or 120°. For example, with a cube, this is done by first looking straight towards one face. Next, the cube is rotated ±45° about the vertical axis, followed by a rotation of approximately 35.264° (precisely arcsin 1⁄√3 or arctan 1⁄√2, which is related to the Magic angle) about the horizontal axis.

In short: we need to Pitch the camera up by ~35.2°, then Yaw by 45°. Better yet, the article gives the precise formula to calculate the angle in question.

ArcSin (1 ÷ Square Root of 3) - I also could have just typed "35.264389"

And look, I learned something new!
The same thing can be achieved with a single Math Expression node.

(Please excuse all of the additional parentheses)

asind((1 / (sqrt(3)))) 
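If you want to sanity-check that angle outside the engine, the same calculation is a one-liner in plain Python (nothing UE4-specific here):

```python
import math

# The isometric pitch angle: arcsin(1 / sqrt(3)),
# equivalently arctan(1 / sqrt(2)) - the "magic angle".
pitch = math.degrees(math.asin(1.0 / math.sqrt(3.0)))
print(pitch)  # ≈ 35.264390
```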

Let's take a look at that in context. I've got a Pawn object that contains a Default Scene Root, and a Camera. The position and rotation are initialised in the construction script.


  • I don't really need to Set the ArcSinAngle as a variable.
  • The angle is multiplied by negative 1 so it's looking down, rather than up
  • The Distance is an arbitrary value - I settled on a value of 100000.0 - or 1km. Not that it really matters, I eyeballed it.
  • The camera offset is 0,0,0 by default, but the starting location could be offset if I wished.
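In plain math terms, the construction script works out to something like this Python sketch (function and variable names are mine, not the Blueprint's, and the "pull back along the view direction" placement is my interpretation of the Distance value):

```python
import math

def iso_camera_transform(offset=(0.0, 0.0, 0.0), distance=100000.0, yaw=45.0):
    """Return (location, (pitch, yaw)) for a camera looking at `offset`.

    Pitch down by arcsin(1/sqrt(3)), yaw by 45 degrees, then pull the
    camera back from the target by `distance` (UE units, i.e. cm).
    """
    pitch = -math.degrees(math.asin(1.0 / math.sqrt(3.0)))  # negative: look down
    yaw_r, pitch_r = math.radians(yaw), math.radians(pitch)
    # Unit forward vector for this pitch/yaw (UE-style axes: X forward, Z up).
    forward = (math.cos(pitch_r) * math.cos(yaw_r),
               math.cos(pitch_r) * math.sin(yaw_r),
               math.sin(pitch_r))
    # Step backwards along the forward vector so the camera looks at `offset`.
    location = tuple(o - distance * f for o, f in zip(offset, forward))
    return location, (pitch, yaw)
```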

  The final parts of the construction script:

The FOV is also an arbitrary value. If I wanted to be truly accurate, I believe the FOV could be correctly calculated based on the distance from the camera to the target surface... I did not do that. A starting FOV of 1 looked good to me. Note that I'll be using the FOV to zoom the camera in and out later. The movement speed scale is a copy of the same code that'll run later in the event graph.

With very little effort, there it was: an Isometric camera.

Clearly: I put a lot of effort into the art.

Note the lack of shadows in the above screenshot. The camera is physically far away, so some settings will need to be adjusted to fix this. I'll deal with that later.

Camera Movement

 Camera movement is largely what you'd expect it to be - take the input (I have mine configured to be both WSAD, and Gamepad Left Stick) and multiply by the Camera Move Speed - as I said earlier, I'm scaling the movement speed so that the camera moves faster when zoomed out, and slower when zoomed in. We'll look at the logic for that at the end of the zoom section.

The isometric camera operates at a 45° angle, so raw input would move the camera diagonally relative to the screen. The RotateVectorAroundAxis node (I love this node) fixes that. It takes the Camera Rotation variable (45° by default, modified by rotation later), and rotates the movement vector appropriately.

Add the result of this to any existing Camera Offset value, and Update using a custom event, creatively named Update Camera Position.
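RotateVectorAroundAxis here is just a 2D rotation about the Z axis. A plain-Python stand-in (the helper name is mine):

```python
import math

def rotate_movement(x, y, camera_yaw_deg):
    """Rotate a stick/WASD input vector around the Z axis by the camera yaw,
    so "forward" on the stick means "up the screen" for the isometric view."""
    r = math.radians(camera_yaw_deg)
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))
```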

I use a lot of custom events. It's a nice way to break up your blueprints - keeps them clean and readable. I could / should colour code my named comment boxes more regularly - I think that could further help readability. Let's take a look at the Update Camera Position event:

As you can see, it's identical to the code in the construction script. I could just call this event in the construction script, given they do the same thing, but I think that makes the code more difficult to follow - jumping back and forth between Event Graph and Construction Script. I don't know if there is any performance difference for having the same code called twice. I'm sure it's negligible. Let me know if it's not! (I don't have any formal programming training, so I'm always happy to learn something new)

Rotating the Camera

I want the camera to smoothly rotate in 90° increments. Same as the movement, this will work in two parts: an initial input calculation, followed by an execution event. This one is a little more complex, so I'll explain it in more detail. 

A button press comes in - either the Gamepad Shoulder Buttons, or Q and E - and meets a Branch: is the camera currently rotating?
Note: this boolean could be replaced with a Gate.

Let's come back to the True branch later, and follow the rest: 

  1. Set bIsRotating to True 
  2. Store the current rotation angle in the variable CameraRotationLast
  3. Add or Subtract 90° from the current rotation
  4. Call the Update event

If either of the camera rotation buttons are pressed while bIsRotating is True, I set an Integer QueueCameraRotDir to 1 or negative 1, depending on if it's CW or CCW (zero for no rotation), and call the QueueCameraRotate event.

Now, let's take a look at the UpdateCameraRotation event:

When the event is called, it starts a Timeline - 0.3 seconds, EaseIn-EaseOut...

Can I take a moment to plug ? It's a terrific visualisation tool for ...easings.

The output of the timeline - a smooth transition from 0 to 1 - powers the Alpha of my Lerp. The A and B values are CameraRotationLast and CameraRotation.

From there, I feed the output of that Lerp into the Yaw of a Make Rotator node, and Set Relative Rotation.
I found it was important to run the UpdateCameraPosition execution event (from the camera move event) each tick during this camera move, to avoid an ugly lurch at the end of the animation.
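The timeline-driven blend boils down to a few lines of math. A sketch (I'm substituting a smoothstep curve for the Timeline's EaseIn-EaseOut key interpolation, which won't be an exact match):

```python
def smoothstep(t):
    """An ease-in/ease-out curve standing in for the Timeline's interpolation."""
    return t * t * (3.0 - 2.0 * t)

def blended_yaw(last_yaw, target_yaw, elapsed, duration=0.3):
    """Camera yaw partway through the 0.3 second rotation timeline."""
    t = min(max(elapsed / duration, 0.0), 1.0)  # timeline position, 0..1
    alpha = smoothstep(t)
    return last_yaw + (target_yaw - last_yaw) * alpha  # the Lerp node
```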

When the timeline finishes its animation, I set bIsRotating to False, and run the CheckQueue event. I haven't explained that one yet. For that, we need to jump back to the sequence started back at the True fork of the bIsRotating branch.


In an attempt to make the camera controls feel more responsive, I've built in a queuing system. These three events run off the aforementioned True fork of the branch.

This accounts for any button presses that happen while the animation is still playing. If they happen in the last half of the animation (a 0.15 second window), it will trigger the animation as soon as the last one finishes.

  • QueueCameraRotate sets a timer. I've set it to 0.15 seconds (the full animation takes 0.3)
  • When that timer triggers the event, it resets the queued rotation back to 0 - ie. do nothing
  • When the timeline finishes and checks for queued presses, it will rotate in the stored direction
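The whole queue amounts to a stored direction with an expiry timer. A minimal sketch (class and method names are mine, and time is stepped manually instead of by engine timers):

```python
class RotationQueue:
    WINDOW = 0.15  # seconds a queued press stays valid (half the 0.3 s animation)

    def __init__(self):
        self.queued_dir = 0   # 1 = CW, -1 = CCW, 0 = nothing queued
        self.timer = 0.0

    def press(self, direction):
        """Called when a rotate button is pressed mid-animation."""
        self.queued_dir = direction
        self.timer = self.WINDOW

    def tick(self, dt):
        """Expire stale presses - mirrors the 0.15 s timer event."""
        if self.queued_dir:
            self.timer -= dt
            if self.timer <= 0.0:
                self.queued_dir = 0

    def check_queue(self):
        """Called when the timeline finishes; returns the rotation to run."""
        direction, self.queued_dir = self.queued_dir, 0
        return direction
```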

This is the kind of feature your users probably won't notice. But it is the kind of UX that your users might notice the absence of.

I thoroughly recommend Seth Coster's fantastic GDC talk Forgiveness Mechanics: Reading Minds for Responsive Gameplay on this topic. It articulates a number of ideas I was only just getting to grips with, and has certainly influenced how I approach UX. Anyone who has worked in VR can tell you the importance of good UX.

Nearly done! Let's take a look at the zoom.


In order to zoom the camera in and out, I cheat a little. I'm really just changing the FOV. My understanding is that in doing so, I'm technically breaking true Isometric Projection. I'm not sure that the difference is distinguishable by human eyes.

  • It is worth noting: this logic will only work with the Gamepad Triggers - the mouse scroll wheel would require slightly different input handling. The trigger is a constant Float input (axis), where the scroll wheel is a series of Input Actions.
  1. Input comes in from the left or right triggers, giving me a float between -1 and 1. 
  2. Multiply this by the CameraZoomSpeed (I told you it was making a return)
  3. Add the results of this multiply to the existing FOV
  4. Clamp it. Don't want the user zooming too near or far. I've got my limits at 0.2 and 1.8.
  5. Set the value as a variable, and Set the FOV in the camera 

This brings us to the final part of the graph! The zoom-derived movement speed scale.
It flows on from the end of Set Field of View:

By dividing the FOV by ZoomMax, I get a float between (roughly) 0 and 1, which I can plug into a Lerp, controlling the Camera Move Speed.
The Multiply by 1 node isn't necessary, but I was considering having an additional variable for even finer control over things.
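Numerically, the zoom and the movement speed scale come down to this (a sketch; the zoom speed and the min/max move speeds are placeholder values of mine, not from the Blueprint):

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def update_zoom(fov, trigger_axis, zoom_speed=0.02, zoom_min=0.2, zoom_max=1.8):
    """Apply trigger input (-1..1) to the FOV, clamped to the zoom limits."""
    return clamp(fov + trigger_axis * zoom_speed, zoom_min, zoom_max)

def move_speed(fov, zoom_max=1.8, speed_min=100.0, speed_max=1000.0):
    """Zoomed out (larger FOV) moves faster; zoomed in moves slower."""
    t = fov / zoom_max  # roughly 0..1
    return speed_min + (speed_max - speed_min) * t  # the Lerp node
```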

And that's about it!

A final note on lighting and shadows

I've been playing with the shadow settings. For a movable directional light, I was having some issues - my results were sub-optimal. I had some luck messing with the Contact Shadow Length.

I had far better results with baked Static lighting, though I'm happy to hear any advice you may have to offer on solving this issue.

One final test:

I wanted to try it out with some "real" art. So, I grabbed a few Megascans, and smashed them together (sloppily) to see what it looked like.

While I don't think this is quite ready to ship, I'm surprised at how well it turned out!

Thanks for reading this far!
This is my first breakdown / tutorial of this kind, so feedback is most appreciated.