Posted almost 13 years ago by RobertGreyling

Release Notes v1.7
It's time for another minor release bump in Spark as we begin to support a different kind of syntax called Jade. You can read up more about Jade at http://jade-lang.com/
To start using it, simply add a reference to Spark and start adding *.shade files to your project. As long as Spark is registered as a view engine, it will automatically attempt to parse them as Jade and render HTML. That, of course, means you can mix and match .spark and .shade files and the parser will work just fine with both of them. You can even create .shade partials that can be consumed by .spark views.
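If you haven't wired Spark up before, here is a minimal sketch of the registration step in ASP.NET MVC, assuming the Spark.Web.Mvc assembly; call it from Application_Start, and note the class name here is purely illustrative:

using System.Web.Mvc;
using Spark.Web.Mvc;

public static class SparkConfig
{
    public static void Register()
    {
        // Once the factory is registered, *.spark views resolve as
        // before and *.shade views are parsed as Jade automatically.
        ViewEngines.Engines.Add(new SparkViewFactory());
    }
}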
It should be noted that this is only provisional support, and point releases will be made as and when bugs are discovered to work out the kinks. If you find any issues, please don't hesitate to raise them, preferably on GitHub.
This feature is very new, but you can already see Jade syntax in use in these places:
Some Razor views compared with Jade views: https://gist.github.com/2371061
Used on the OWIN/Gate project with Sake for builds: https://github.com/loudej/gate/tree/makefiles/src/build
The new Sake library (think make for C#), itself based upon the Spark View Engine: https://github.com/loudej/sake/tree/master/src/Sake.Library/Shared
The makefile on the OWIN/Gate project: https://github.com/loudej/gate/blob/makefiles/makefile.shade
Other fixes since the v1.6 release include:
A JS compiler bug where toString() was occasionally invoked on null, plus support for adding ~ to JS view paths
A bug fix for Area support when finding layouts
Posted over 13 years ago by RobertGreyling
Release Notes 1.6
There are two fixes that I've introduced into Spark v1.6 which address the following sticking points:
Attributes surrounded by single quotes with double quotes inside will now be preserved exactly as you wrote them.
For example:
<tag attr='something; other="value1, value2"' />
now results in exactly the same HTML output, where Spark used to incorrectly substitute the single quotes on the outside with double quotes. This never used to be an issue a few years ago, but with new libraries taking advantage of HTML5 syntax, attributes like the one above are becoming very common and Spark just got in the way. Anyway, this is fixed now, but if any of your previous views were "relying" on Spark swapping those to double quotes for you (though I can't imagine why), then you may want to retest those views.
The second thing that is happening is that a number of JavaScript frameworks, jQuery Templates for example, are jumping on the whole ${ } syntax bandwagon. Spark will naturally try to parse these expressions and expand the variables into the rendered output, so folks have been asking for a way to escape them so that they are not parsed and are left alone for the JS library to consume.
So there are now two ways you can escape Spark code expressions:
Single item escaping (github commit)
Entire block escaping (github commit)
Single item escaping
This can be done by using one of three escape characters for each expression type. Rather than explain, the following is an example of how you would do this in your Spark view:
<div>
$${Encoded.Escaped.with.a.dollar < 0}
\${Encoded.Escaped.with.a.backslash < 0}
`${Encoded.Escaped.with.a.backtick < 0}
</div>
<div>
!!{Unencoded.Escaped.with.a.dollar < 0}
\!{Unencoded.Escaped.with.a.backslash < 0}
`!{Unencoded.Escaped.with.a.backtick < 0}
</div>
<div>
$$!{Encoded.Silent.Nulls.Escaped.with.a.dollar < 0}
\$!{Encoded.Silent.Nulls.Escaped.with.a.backslash < 0}
`$!{Encoded.Silent.Nulls.Escaped.with.a.backtick < 0}
</div>
results in the following verbatim output:
<div>
${Encoded.Escaped.with.a.dollar < 0}
${Encoded.Escaped.with.a.backslash < 0}
${Encoded.Escaped.with.a.backtick < 0}
</div>
<div>
!{Unencoded.Escaped.with.a.dollar < 0}
!{Unencoded.Escaped.with.a.backslash < 0}
!{Unencoded.Escaped.with.a.backtick < 0}
</div>
<div>
$!{Encoded.Silent.Nulls.Escaped.with.a.dollar < 0}
$!{Encoded.Silent.Nulls.Escaped.with.a.backslash < 0}
$!{Encoded.Silent.Nulls.Escaped.with.a.backtick < 0}
</div>
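As a concrete use case, here is a sketch of keeping a jQuery Templates snippet intact inside a .spark view using the backslash escape; the template id and field name are made up for illustration:

<script id="personTmpl" type="text/x-jquery-tmpl">
  <li>\${Name}</li>
</script>

Spark emits ${Name} verbatim, leaving it for jQuery Templates to expand on the client.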
Now I know that currently the $ is the popular delimiter, but all we need is for one, er... visionary, to decide to use the exclamation point (someone probably already has) and we'd be in the same boat, so this code change takes that into account as well.
Entire block escaping
This is already in the current version, but worth mentioning again. The second way you can do this is thanks to Mike Murray, who introduced the new <ignore> special tag for Spark, which you can use to output verbatim anything inside it, like so:
<html>
<head>
<title>ignore test</title>
</head>
<body>
<h1>ignore test</h1>
<p>${System.DateTime.Now}</p>
<ignore>
<div>
Regular text ${This.isnt.code < 0}
<var dummy="This isn't a variable" />
</div>
</ignore>
</body>
</html>
Lastly, as of Spark v1.6, it is now fully Mono compliant - well, there are the VB bits, but I'm not counting those :) - thanks to some awesome community efforts by Alex and Jaime! And thanks to them and Josh, we also have a first-class, shining implementation of Spark now in the FubuMVC framework that really is better than I ever could have hoped for.
Please reach out and thank these guys; they've done some spectacular work here in their own spare time and it's hugely appreciated.
Posted almost 14 years ago by RobertGreyling
Release Notes 1.5.1
Minor changes including:
Removal of the class constraint on strongly typed models in Spark views, which means you can now also use a value type for the viewmodel.
ViewBag support for MVC3
Release Notes 1.5
There have been a lot of minor changes going on since version 1.1, but most important to note are the major changes, which include:
Support for the HTML5 "section" tag. Spark has now renamed its own section tag to "segment" instead to avoid clashes. You can still use "section" in the Spark sense for legacy support by specifying ParseSectionAsSegment = true if needed while you transition (see the configuration sketch after this list)
Bindings - this is a massive feature that further simplifies your views by giving you a powerful way to move code out of the view while maintaining an HTML look. You can read a blog post on the topic and the official documentation
The default output encoding is now specifically UTF-8 instead of an unknown "Default", and you can configure it under the "system.web/globalization" section in your web.config and Spark will read from there
Bug fix for the empty <use master="" /> tag, which was not recognised before and would just use the default master page
Bug fix for compiler warnings that were treated as errors during a batch compile process. These are now ignored
The batch compiler can now select dynamically between v3.5 and v4.0 of the .NET Framework depending on which is already loaded
Spark now supports a <markdown> tag within which you can write markdown (just like on Stack Overflow) which will be rendered out to HTML (see the example after this list)
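To make the section/segment item above concrete, here is a sketch of turning the legacy behaviour on in code. ParseSectionAsSegment is the setting named in the note above, but the exact property shape and wiring shown here are assumptions - check the Spark documentation for your version:

using System.Web.Mvc;
using Spark;
using Spark.Web.Mvc;

public static class SparkLegacySectionConfig
{
    public static void Register()
    {
        // Assumed settings shape: keep parsing <section> as the old
        // Spark tag while views are migrated to <segment>.
        var settings = new SparkSettings { ParseSectionAsSegment = true };
        ViewEngines.Engines.Add(new SparkViewFactory(settings));
    }
}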
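And a quick sketch of the new <markdown> tag in use inside a view; the content here is made up for illustration:

<markdown>
## What's new
This text is written in *markdown* (just like on Stack Overflow) and is rendered out to HTML.
</markdown>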
There are 3 main releases that you can use depending on your project:
Working on the latest stuff? Get the version for MVC3 which targets .NET 4
Latest .NET Framework yes, but MVC3 too bleeding edge, or got a big investment in MVC2? Then get the MVC2 version which targets .NET 4
Part of that large group of devs who are still on .NET 3.5? Go ahead and get the MVC2 version that targets .NET 3.5
You can download this release here or simply get it via NuGet as a reference on your project from within Visual Studio 2010.
If you have any questions, then feel free to ask on Stack Overflow using the Spark-View-Engine tag, or on the Google Group email list.